

From Meta, a Wikimedia project coordination wiki

This page collects possible future use cases. Please describe here how you would like to use Wikidata, or what functionality you want to see added to it in the future. Give examples! Feel free to expand and discuss, add questions, make sub pages, etc.

Georeferenced data in scientific papers


As far as I know, georeferenced data in scientific papers is not centralized or uploaded anywhere.

Using Wikidata to centralize and format that kind of information could be useful: searching through hundreds of appendices, tables, and texts, all in varying formats, limits the use of the information to researchers in the field. Centralized, the data could be used in Wikipedia, Wikispecies, and future Wikimedia projects, as well as for general open use. (LeónHormiga (talk) 22:31, 27 July 2020 (UTC))[reply]


  • Comparison articles: Automatic generation of list articles is frequently named as one of the main use cases of Wikidata. There is, however, a particular type of list article for which Wikidata may be even more useful: comparison articles. You can find many examples of these at en:Category:Software comparisons. It would be useful, for example, to query for software of a particular kind (say, a database server or a virtual machine), with a particular license (or group of licenses), released in the last year, and having particular features.
  • Sharing multilingual tables between projects.
  • Automatically producing tables in a similar fashion as list articles. There are a lot of other list articles which have tables with a line for each item. King lists, sports results, etc. If each item has a Wikidata page then it should be possible to populate the table with info from Wikidata.
  • Automatically compiling a table showing the content from all infoboxes that are based on a certain template. It should be easy to filter, sort and search in the table. One aim is to check the templates for errors, another to produce tables useful in articles.
  • Trusted users should easily be able to edit or import a whole column or table into Wikidata, and thus revise many templates at once. It should be possible to easily and regularly import whole tables in many alternative formats from a reliable web site. One approach might be to use an open-source online spreadsheet tool, which could also be used for automatically producing plots and charts.
  • Linking time-variant numerical data in the body text, for example population numbers, with other language versions of the article, and with data in info boxes and tables.
  • Simple but secure uploading of a new version of a large data file, e.g. a spread sheet, from a statistical source to the data wiki, for further update of a large amount of articles.
  • Maps showing positions of for example all universities or all churches (that have Wikipedia articles) in a certain region.
  • Automatically generated charts and plots, for example showing the population of all cities in a certain region.
  • Provide a central repository for citation data via normal Wikidata entries on publications. Through article-side inclusion syntax such as {{#property:P50|id=Q#######}}, Wikipedia templates similar to {{Sfn}} could automatically assemble citation information with minimal input from the user.
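The citation-repository idea above can be sketched in miniature. The property IDs below are real Wikidata properties (P50 author, P1476 title, P577 publication date), but the record layout and function name are invented for illustration; a real template would resolve the item server-side:

```python
# Illustrative sketch: assemble a citation string from a Wikidata-style
# record keyed by property ID. P50 (author), P1476 (title) and
# P577 (publication date) are real Wikidata properties; the record
# format itself is simplified.

def format_citation(record):
    """Build a short citation from a dict keyed by property ID."""
    authors = ", ".join(record.get("P50", []))
    title = record.get("P1476", "")
    year = str(record.get("P577", ""))
    parts = [p for p in (authors, f'"{title}"', year) if p and p != '""']
    return ", ".join(parts)

paper = {
    "P50": ["John Doe", "Max Mustermann"],
    "P1476": "A long paper title",
    "P577": 2012,
}
print(format_citation(paper))
# John Doe, Max Mustermann, "A long paper title", 2012
```

A template like {{Sfn}} would invoke equivalent logic; the point is that the bibliographic facts live in one central record and every article citing the work stays in sync.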


  • Wikidata will contain bibliographic records as part of the system for providing source references for individual statements.
  • Such bibliographic records could be collected and curated in topic specific lists
  • Community curated descriptions and evaluations of individual sources would be very helpful too.
  • Would it make sense to bulk import bibliographic records, or is it better to import individual records on demand?
  • Note that information about Works can be modeled as data items, but information about manifestations (see en:FRBR) is (usually?) a set of fixed records from an authoritative source that should rarely if ever change.
  • Interesting in this context: Linking Library Data to Wikipedia YouTube video by OCLC


See also Wikidata:Wikisource
  • Wikisource needs metadata about the documents being transcribed. Currently, this info needs to be kept in sync manually with the file description page on Commons.
  • This information will usually be at the manifestation level (see en:FRBR; compare Bibliographies) and would reference data items for the respective Work and Author. Information at the Item or file level (i.e. about the scan itself) would be maintained on Commons.

Notes from the Wikimania 2012 unconference session on cataloging in Wikisource (see the Wikisource Roadmap here)

I want to build an example to walk through, but for now here is a quick summary. The first stage involves an improvement to index pages that is already in the works: much metadata can be brought in from the DjVu file. Tpt is working on this. The idea is that much of the index form is filled in automatically and the Wikisourcerer checks it over and saves. This simplifies the creation of index pages (a current pain point), and with the surrounding tools Tpt has in place, the API will harvest the metadata. This is good in many ways, but it only works at the volume level (imagine this clarified with the precise librarian's term). Sometimes the volume is not the best place to keep metadata: an anthology of poetry can contain many poems by many poets on many subjects, with non-unique titles.
The second phase is a gadget that deposits a metadata template on a wikipage. Where a wikipage contains a single poem from a volume of poetry, the metadata in the reading namespace will be more useful than that on the index page ascribed to the volume. For usability, the form should look familiar from the index-page input form. So far this covers only the metadata generally gathered in card catalogs, but we can actually gather more bibliographic data than that; we are not paper. As long as we are using a form to collect the basic information, we might as well ask the curator what the work is about (dc subject). We will ask that the choices be given as Wikipedia URLs rather than plain text. Which Alexandre Dumas is the author? http://en.wikipedia.org/wiki/Alexandre_Dumas,_père (imagine this, however, as the Wikidata identity URL rather than the textual title of the moment). What is it about? http://en.wikipedia.org/wiki/Haarlem http://en.wikipedia.org/wiki/Rampjaar http://en.wikipedia.org/wiki/Cultivar http://en.wikipedia.org/wiki/Imprisonment (again, imagine the Wikidata identity URL rather than the textual title of the moment) http://catalog.loc.gov/cgi-bin/Pwebrecon.cgi?SC=Subject&SA=Tulip%20Mania%2C%201634%2D1637%20Fiction%2E&PID=Uo4cpUgPWiQ_SX7DmfXDBsEY2DCpX&BROWSE=1&HC=28&SID=4 (but imagine, or fix, this using the URL built on LOC identifiers). If there is no Wikipedia article for a label, hold off during stage 2. The gadget deposits all of this as a template, like the stage 1 index-page template, and the API can then work with the templates.
The third phase is Wikidata harvesting and incorporating all the data from the templates, superseding them and the earlier API interface. An extension is then written to replace the template-creating gadget and submit the labels directly to Wikidata. For the curators inputting labels, the data transfer should be a near-seamless transition done by bots, and the extension should be designed around the workflow discoveries made while using the gadget. The biggest difference is that now, when there is no Wikipedia article to link to for a label, the curator makes a Wikidata stub. A Wikidata stub is functionally a Wikipedia redlink, with the improvement of being associated with a blurb useful for clarification and disambiguation if needed. Of course, the labels placed in earlier stages and in the future can then be used for all kinds of Wikidata magic. I will work through an example of what would then be possible, associating things in very meaningful but currently unavailable ways, in the coming weeks. Please edit the above, and especially fix the terminology where it is lacking.
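As a rough illustration of the stage-2 idea, here is a sketch (the template name "poem metadata" and the field layout are invented) of how a gadget might deposit page-level metadata as a wikitext template, identifying the author and subjects by URL rather than by plain text:

```python
# Hypothetical sketch of stage 2: a gadget deposits page-level metadata
# as a wikitext template, identifying author and subjects by URL rather
# than by free text. The template name "poem metadata" is invented.

def render_metadata_template(meta):
    """Render a metadata dict as a wikitext template call."""
    lines = ["{{poem metadata"]
    for key, value in meta.items():
        if isinstance(value, list):
            value = "; ".join(value)   # multiple subjects on one line
        lines.append(f"| {key} = {value}")
    lines.append("}}")
    return "\n".join(lines)

meta = {
    "author": "http://en.wikipedia.org/wiki/Alexandre_Dumas,_père",
    "subject": [
        "http://en.wikipedia.org/wiki/Haarlem",
        "http://en.wikipedia.org/wiki/Rampjaar",
    ],
}
print(render_metadata_template(meta))
```

An API harvester can later parse these templates mechanically, which is what makes stage 3 (bulk transfer into Wikidata) feasible.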


See also phab:T19503
See also d:Wikidata:Wikimedia Commons, d:Wikidata:Project chat/Archive/2013/03#Sister projects links and data
See also m:Beyond categories
  • Of course we need sitelinks to Commons, for Commons interwikis + {{commonscat}} and friends.
  • The information in the Creator namespace could be transferred to Wikidata, where it could be maintained for use on Commons, Wikipedia, and other projects.
  • The same could be done for the information within the Book templates: that would store all the data used in Wikisource.
  • Institution templates may draw information from Wikidata
  • License templates may draw information from Wikidata, though that kind of info tends to be "primary", unsourced and uncontested.
  • The "information" templates on the file description pages can be replaced by structured data records maintained as subpages or "attachments" to the page. The mechanisms for storing and editing them would be very similar to the ones used on the Wikidata project, and they may be implemented by the same extension (Wikibase) or another extension building on top of Wikibase.
  • Such meta-info about files often mixes information on the file, the item and possibly the manifestation level (see en:FRBR). Most of it will be primary, unsourced and uncontested, but in some cases, it may be useful to be able to provide references.
  • The ability to have Wikidata items for pictures etc. on Commons. Properties such as depicts, location, type (image, video, etc.) or composition (portrait, etc.) could then be set. This would allow more powerful Commons searches, e.g. "portraits [composition=portrait] of Barack Obama [depicts=Barack Obama] taken before 2008 [date created=before 2008]", rather than searching only the title and description.
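The kind of structured search described in the last bullet can be sketched as a filter over property records. The record layout and field names below are invented, not a real Commons schema:

```python
# Sketch of structured Commons search: filter file records by "depicts",
# "composition" and creation date instead of matching only titles and
# descriptions. The record layout is invented for illustration.

files = [
    {"file": "Obama1.jpg", "depicts": "Barack Obama",
     "composition": "portrait", "date_created": 2006},
    {"file": "Obama2.jpg", "depicts": "Barack Obama",
     "composition": "group photo", "date_created": 2009},
    {"file": "Capitol.jpg", "depicts": "US Capitol",
     "composition": "landscape", "date_created": 2005},
]

def search(records, **criteria):
    """Return records matching all criteria.
    A callable criterion is applied as a predicate to the value."""
    def matches(rec):
        for key, want in criteria.items():
            have = rec.get(key)
            if callable(want):
                if not want(have):
                    return False
            elif have != want:
                return False
        return True
    return [r for r in records if matches(r)]

hits = search(files, depicts="Barack Obama", composition="portrait",
              date_created=lambda y: y < 2008)
print([h["file"] for h in hits])
# ['Obama1.jpg']
```

The query "portraits of Barack Obama taken before 2008" becomes three property constraints instead of a fragile full-text match.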

Where an item on Wikidata corresponds to a category (=gallery) on Commons we should have a link to that category. It is probably useful to also have a link to one canonical photo for that item - the photo which Wikidata suggests be used in infoboxes.

Authoritative Records


Several ideas described above use data records that, unlike the data items corresponding to Wikipedia articles, contain primary, uncontested and unsourced information. Examples:

  • Bibliographic data from an authoritative source like a library
  • File meta data maintained by the wiki community.

In these cases, properties have a single definite value, and provenance information can be given at the record level instead of on individual statements.
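A minimal sketch of the distinction, with invented class and field names: a record-level source covers every property of an authoritative record, whereas ordinary statements each carry their own references:

```python
# Invented names throughout: contrast record-level provenance
# (one source covering the whole record) with per-statement references.

from dataclasses import dataclass, field

@dataclass
class Statement:
    value: object
    references: list = field(default_factory=list)  # sourced per statement

@dataclass
class AuthoritativeRecord:
    source: str                                     # record-level provenance
    properties: dict = field(default_factory=dict)  # single definite values

record = AuthoritativeRecord(
    source="Library of Congress catalog",
    properties={"title": "Example Work", "pages": 320},
)
# Every property implicitly inherits the record's single source:
print(record.source, record.properties["pages"])
```

This is one way the open questions below could be framed: whether such records deserve a distinct data model, or whether a record is just an item all of whose statements share one reference.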


  • should such records use the same data model as full fledged data items?
  • should they be in a separate namespace?
  • is the distinction really this clear cut, in which cases could both types of information (sourced and contested vs authoritative or editorial) be mixed?
  • would it make sense to mark some properties as editorial? or some usages of properties?



Database for an international and multilingual Watersport-Wiki:

  • Harbours
  • List of Lights
  • Dive spots
  • Whitewater sections



Development overview

  • Individual quotes as data items.
    • Different "levels"/versions for omissis.
    • Possibility to associate a quotation with one or more quotations that are its translations in other languages. This would allow quotations to be shared and cross-linked between language editions.
    • This is the main content work for a Wikiquote, quotations also need to be checked for accuracy, trimmed if necessary etc.
  • Associated to their (authoritative) metadata, such as author(s), translator(s), curators, title, publisher etc., possibly even copyright status.
    • Could intersect with Wikisource data to automatically redirect to the source context if available.
    • This is supposed to be reasonably stable.
  • Curation work: this is the presentation of quotations and the main job of Wikiquote after selection. Needs to be very flexible.
    • Quotations tagged by time, topic etc.
    • Automatic lists fetching from the above.
  • More sisterproject coordination.
    • A Wiktionary entry might use a Wikiquote quotation as an example.
    • Wikiquote could fetch some data to locally provide or automatically link context information on Wikipedia to explain quotations.



I would suggest to discuss this on Wiktionary_future (or just look at d:Wikidata:Wiktionary before) --Nobelium (talk) 19:59, 2 May 2013 (UTC)[reply]

Wikidata should be able to handle a lot of the basic facts about words, concerning:

  • Etymology:
    • origin language with native term
    • first attestation (date + citation + reference)
    • referenced etymology itself ?
  • Grammar:
    • word's gender (useful for some languages, like French)
    • way of inflection (e.g. de-conj-weak for a German weak verb)
    • word inflections (conjugation and declension e.g. singular, plural, masculine, feminine form)
  • Pronunciation:
    • IPA / X-SAMPA
    • homophone
    • audio illustrations
  • Other information:
    • variants
    • false friend
    • derivatives
    • international derivatives
    • anagrams
    • synonyms
Discuss the components at Wikidata/Property proposal!

Maybe I'm too enthusiastic, but centralizing all these things sounds good to me. What do you think? V!v£ l@ Rosière /Murmurer…/ 10:27, 11 September 2012 (UTC)[reply]

Support Support. It's annoying that Wiktionary is so inefficient: you have to copy the facts/information across the Wiktionaries and update it all the time. In practice the last point is very rare, I think. @synonyms: Perhaps it will be possible to involve projects like https://www.openthesaurus.de/ which obviously (as you can see in the domain name) provides synonym lists, which are furthermore downloadable. That's used, for example, in the dictionary http://www.dict.cc/ (especially for the German words) and as a LibreOffice/OpenOffice.org extension. But I know that this may take a long time to develop. So this improvement may be a great step towards a more efficient Wiktionary and another step towards http://www.omegawiki.org/ --Nobelium (talk) 00:12, 3 October 2012 (UTC)[reply]
Strong Support Support, especially for grammar and pronunciation issues. It would improve the development of Wiktionaries greatly! Infovarius (talk) 18:33, 2 November 2012 (UTC)[reply]
I'm a bit skeptical about doing all of that automatically. Some projects might have different standards regarding IPA / X-SAMPA (enwikt uses ɹ instead of r for English words). Some have different standards regarding derivatives (are hyponyms/hypernyms listed separately or not?). They may list different declensions (e.g. regarding whether old forms are included or not, or how). That said, many things have potential if done well!
One thing I'd definitely add is iw-links - not sure why they are not there already. This seems like the simplest use case for Wiktionary, since it's just identical article names. //Shell 15:10, 16 November 2012 (UTC)[reply]
Support Support, of course. --Dalton2 (talk) 21:13, 16 November 2012 (UTC)[reply]
Support Support 1000%. It's useful not only for wikt, but also for many other projects. The Russian Wikipedia has a bulky w:ru:Template:Локатив, which does what the Russian {{grammar:}} function cannot yet do for putting placenames in the locative when automatically forming a person's birth/death-place category. But even if we train MediaWiki better, many placenames are irregular and should be kept in a dictionary. Putting them in {{#switch}}-based templates will cause many pages to regenerate each time one more item is added; currently, we have to specify the locative manually in the template on each such page. Ignatus (talk) 18:27, 5 December 2012 (UTC)[reply]
In a multilingual context, I Support Support a system where you can enter translations to various languages, as well as the meaning of the "exact" word in various languages. Think OmegaWiki. Quote: "By December 2004 the basic functionalities were clear: an extension for the Mediawiki software, building on the Wikidata project." Bennylin 18:40, 18 December 2012 (UTC)[reply]
Oppose Oppose All of the above are useful features but they should be added to Wiktionary. They don't belong in Wikidata.Filceolaire (talk) 01:12, 13 January 2013 (UTC)[reply]
Support Support
  • Other information:
    • figure
It would be great if a figure were connected to the specific meaning/definition (not to the whole entry). E.g. see the entry sock with a figure for the first definition in the English Wiktionary. See the entries доска and кольцо (nouns), идти and писать (verbs) in the Russian Wiktionary, with figures for many of the meanings/definitions. -- Andrew Krizhanovsky (talk) 19:16, 12 February 2013 (UTC)[reply]

Where a Wikidata page corresponds to a word on Wiktionary, there may be a case for adding an infobox to Wiktionary with some of the Wikidata info.

Neutral Neutral. I think it will be very hard to use Wikidata for Wiktionaries. Wiktionary items are not about concepts but about words. An additional challenge is that Wiktionary can hold a word in many languages on one page. Sitelinks might work if Wikidata holds separate items for each Wiktionary word. HenkvD (talk) 19:02, 9 March 2013 (UTC)[reply]
Support Support Strong support! The most important thing for my use case is a machine-readable dictionary in Wikidata. It's currently hard to extract a machine-readable dictionary from the various Wiktionaries, since they don't share the same rules, the data isn't structured, and there are many exceptions to deal with. What would be useful: given a word in language A, give a list of translations in language B. (It's not the only way people use Wiktionary; see this Google Scholar search for reference: http://scholar.google.com/scholar?q=wiktionary) (Is OmegaWiki a separate project? It could be a starting point, I guess.) Quentin Pradet (talk) 12:35, 29 March 2013 (UTC)[reply]
Support Support Almost everything on Wiktionary is raw data. Maybe different prefix than Q for this? 23PowerZ (talk) 00:38, 9 April 2013 (UTC)[reply]
Support Support I think it's feasible to adapt Wiktionary to Wikidata by creating a new namespace (or prefix): instead of Q, D for Dictionary. For example the English word "green" could get D1. --Bigbossfarin (talk) 22:13, 5 August 2013 (UTC)[reply]
Strong Oppose Oppose for now I'm afraid I'm going to have to agree with -sche's comments at Wiktionary future#-sche's comments (and replies to them) that, both of us speaking as members of the Wiktionary community, the current implementation of Wikidata is unsuited to meet the demands of each different Wiktionary edition's standards, even for something which on-the-surface seems simple like the task of integration. Wiktionary's stated mission is "all words in all languages" and because there are different varying notions of what constitutes "all", "words", and "languages" for each different Wiktionary, we cannot as yet expand the scope of Wikidata to accommodate all of them. An important point raised there which is also relevant to here is that the concept of grammar, syntax, even etymologies and pronunciations are not consistent across all Wiktionaries, which cannot be covered by Wikidata - the best example given is -sche's comment that "what English grammarians think of as an "adjective" roughly maps to at least three different parts of speech in Japanese (形容詞 [keiyōshi], 形容動詞 [keiyō dōshi], and 連体詞 [rentaishi])". How will Wikidata account then, for the representation of "adjective" as a grammatical category in Japanese, or will it again be dominated by English speaker standards and concepts? I'm sorry, but the basic fact remains that all Wiktionaries and their databases are tailored to the unique needs of each particular language, and not like any other, that they should be integrated or that they can be translated with sufficient accuracy across a Wikimedia project to represent the same concepts.
And let me emphasize one last thing. This discussion is not a simple "support" or "oppose" vote. Neither Meta nor Wikidata can decide for the Wiktionary community whether we want integration or not. That is something we decide for ourselves, and project autonomy and self-determination must be maintained on principle, lest we fear the authoritarian grip on project communities by ambitious sister projects. TeleComNasSprVen (talk) 18:38, 4 February 2014 (UTC)[reply]
Lastly, I emphasized for now because Wikidata in its current state is willing but not capable of demonstrating that the transition would be smooth enough to preserve the information all Wiktionaries currently store in their respective databases; should that change in the future and any of the previous concerns be addressed, I may be willing to support. Grammatical categories are only one piece of the bigger picture. TeleComNasSprVen (talk) 18:41, 4 February 2014 (UTC)[reply]



To represent the taxonomic hierarchy, Wikispecies currently uses nested templates, one nesting level for each hierarchy level. While this is a working solution that guarantees consistency, the knowledge expressed in this structure is not reusable by the Wikipedias, contrary to the mission of Wikispecies. Similarly, other information in Wikispecies could be more easily reused by Wikipedia and other open-knowledge projects if most or all of it were migrated to Wikidata.

For both Wikipedia and Wikispecies, the new Wikidata could provide fascinating opportunities for recording knowledge about organism properties (descriptions of how they look, behave, etc.) in structured form. Most Wikipedias present such information in non-structured form; however, fungus articles on the English Wikipedia, for example, provide a "(myco)morphobox" (e.g. [1]). Expressing organism properties (also known as "characters" or "features") as data is especially useful because it supports the identification of unknown organisms in the field (field-guide, nature-guide functionality). Furthermore, the huge work involved in collecting such data can be internationalized.

With this in mind, the present plan of the Wikidata-core may present the following challenges:

  • The selection of properties to be added to any Wikidata item is always from the sum of all Wikidata properties on all items, that is, hundreds of thousands, and selection relies on user knowledge via alphabetical auto-lookup. Users thus must learn the set of recommended properties for a given class of items by heart.
  • Properties are listed alphabetically and ungrouped, without regard to a logical sequence accessible to the user.
  • No support is planned for selecting specific values through a picklist.
  • Quantitative measurements beyond a single number are not supported (i.e. no statistical measures and ranges like "length (3-) 5-10 (-12) cm").

All these problems are generic beyond organism description, but due to the number, diversity, and variability of organisms they may present extra challenges there. The first three can be partially addressed with mechanisms Wikidata already provides, by annotating Wikidata properties with additional properties, similar to the type property. These could e.g. be "recommended_for_category", "order_for_category", "group_by_heading", "recommended_value_list". The true challenge would be how to extend the functionality of the base editor with the knowledge provided by such property annotation. While it should not be the responsibility of Wikidata to provide all editing possibilities, it would be welcome if the data-editing system were built with such community-driven extensibility in mind.
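The range notation cited above, "(3-) 5-10 (-12) cm" (rare minimum, typical range, rare maximum), is regular enough to parse mechanically. Here is a sketch; the regular expression and the returned field names are our own invention:

```python
import re

# Sketch: parse the taxonomic range notation "(3-) 5-10 (-12) cm",
# i.e. rare minimum, typical range, rare maximum. The regex and the
# returned field names are invented for illustration.

def parse_range(text):
    pattern = (r"(?:\((?P<rare_min>[\d.]+)-\)\s*)?"
               r"(?P<low>[\d.]+)-(?P<high>[\d.]+)"
               r"(?:\s*\(-(?P<rare_max>[\d.]+)\))?"
               r"\s*(?P<unit>\w+)")
    m = re.fullmatch(pattern, text.strip())
    if not m:
        raise ValueError(f"unrecognized range: {text!r}")
    return {k: (float(v) if k != "unit" and v is not None else v)
            for k, v in m.groupdict().items()}

print(parse_range("(3-) 5-10 (-12) cm"))
# {'rare_min': 3.0, 'low': 5.0, 'high': 10.0, 'rare_max': 12.0, 'unit': 'cm'}
```

A richer Wikidata datatype could store the four numbers plus the unit, rather than forcing such measurements into a single value.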

Quick clarification:
  • We do plan to support "schemas" for specific types of items, like countries, persons, species, etc, so users don't have to hunt for all the "normal" properties for such items every time. This is not finalized yet, but I'm pretty sure there will be some support for this. The simplest way would actually be to have a "create a new item like X" function, which would use all the properties present in X. Another way would be to maintain schemas explicitly as lists of properties.
  • Quantitative measurements will not be represented by a single value, but by a value plus an accuracy (error margin, standard deviation, or something - not sure yet how this will be represented, but it will be there). Additional information can be given using qualifiers. More complex data types for ranges (possibly with fuzzy edges) can be added, but probably won't be part of the baseline implementation.
-- Daniel Kinzler (WMDE) (talk) 16:14, 26 July 2012 (UTC)[reply]
Support Support--Biggerj1 (talk) 07:53, 7 July 2013 (UTC)[reply]


  • Most Wikivoyage pages describe a destination city or geographic region. Each destination-level page contains several sections packed with individual listings which strongly resemble database records (names, addresses, co-ordinates, contact info, descriptions). The description text would change from one Wikivoyage language to another, but many of the other fields (such as URLs and telephone numbers) would remain unchanged. Like encyclopaedia infoboxes, the listings are template-like and do have a fixed list of fields... but one Wikivoyage destination page contains multiple listings. K7L (talk) 18:31, 16 March 2013 (UTC)[reply]

Links between pages on the same topic on different Wikimedia sibling projects are not currently handled in the same manner as interlanguage links within the same project. There is a particularly ugly template, wikipedia:Template:Sister project links, which blindly invokes special:search on various sibling projects to try to find articles with the same title. There are various templates such as "commons category" which create a direct link to a sibling, dumping templated boxes into the "external links" section of an article. (one exception: Wikivoyage uses mw:extension:RelatedSites to force links to Wikipedia and Commons into the sidebar.) Unlike the interlanguage links (which have been maintained by pywikipediabot's interwiki.py for a decade), there is no clean method of automatically maintaining these links.

The current Wikidata implementation of Wikipedia interlanguage links needs to be expanded to indicate that the current page has a corresponding Commons category, (if a geographical place) Wikinews category and Wikivoyage tour guide page, (if a plant or animal) a Wikispecies binomial description, (if a historic author) a Wikiquote or Wikisource category... any and all variants of "same topic on another Wikimedia project". Presumably only the ones which match by language (English Wikipedia to English Wikisource), by project (Wikivoyage en français to English Wikivoyage) or which have no language-specific versions (Commons) should be displayed on the individual projects, but Wikidata should list every page on every language of every project (not just WP) which has the identical topic. This would remove the sibling links from manually-placed template links and move them into automatically updateable sidebar links, as we have always done for languages. K7L (talk) 18:31, 16 March 2013 (UTC)[reply]
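The display rule described above (show a sibling link only when it matches the viewer's language, or when the target has no language-specific versions, like Commons) can be sketched as a simple filter. All data, site names and field names below are invented:

```python
# Sketch of the sitelink display rule: from an item's full list of
# sitelinks, show only those matching the viewing wiki's language,
# plus language-neutral projects such as Commons. Data is invented.

LANGUAGE_NEUTRAL = {"commons", "wikispecies"}

sitelinks = [
    {"site": "wikipedia",  "lang": "en",  "title": "Paris"},
    {"site": "wikipedia",  "lang": "fr",  "title": "Paris"},
    {"site": "wikivoyage", "lang": "en",  "title": "Paris"},
    {"site": "commons",    "lang": None,  "title": "Category:Paris"},
]

def links_to_display(links, viewer_lang):
    return [l for l in links
            if l["site"] in LANGUAGE_NEUTRAL or l["lang"] == viewer_lang]

shown = links_to_display(sitelinks, "en")
print([(l["site"], l["title"]) for l in shown])
# [('wikipedia', 'Paris'), ('wikivoyage', 'Paris'), ('commons', 'Category:Paris')]
```

Wikidata would store the complete list; each wiki would apply its own filter when rendering the sidebar, just as interlanguage links are handled today.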

World University and School

  • World University and School (http://worlduniversityandschool.org/ and http://worlduniversity.wikia.com/wiki/World_University) would like to use Wikidata for development. In addition to the resources itemized in this WUaS SUBJECT TEMPLATE - http://worlduniversity.wikia.com/wiki/SUBJECT_TEMPLATE - WUaS would also like, as a few highlights, to develop a universal translator with Wikidata for 3,000-8,000 languages, building on Google Translate +. WUaS would also like to incorporate virtual place-coordinates to/from Wikidata for virtual world spaces, and potentially for all museums in all languages which have some free art resources online, for example. And WUaS would like to develop a Music School, for all instruments and in all languages, as wiki pages, and also for collaborative real-time music making, eventually.


  • Add (often-cited) sources to Wikidata so that you can cite them in Wikipedia with <ref name="#wikidataname"/> or similar in the article. That way there wouldn't be all those long <ref name="#wikidataname">John Doe, Max Musterman, Max Müller, A long long paper title for this paper about some super duper interesting stuff, Publishing house name, 2012, p. 112-123</ref> right in the middle of the text in the source code.
  • Wikidata/Queries
  • Wikidata/Infoboxes
  • Wikidata/Notes/CKAN

External Dataset conversion ideas

  • Standard Industrial Classifications
  • Legislative cross-referencing
  • Subject codes for OCLC, authority control cross-referencing

Market & Economic Data


One existing long-term project, representing nearly a million data points in a project compatible with Wikimedia licenses, is the Grand Exchange Market Watch at the RuneScape Wiki. The scope and breadth of this project are hard to overestimate: it collects the daily trade values of an in-game commodity exchange of a massively multiplayer online role-playing game (MMORPG), involving nearly 3000 different commodities. The information is currently collected through a template hierarchy that allows extraction of various related pieces of information, and that information is also widely used throughout that particular wiki in a number of ways. The original goal was to update the data in one place and have it dispersed where needed... as a template.

Reimplementing this as a Wikidata project instead would allow access to the historical information and allow the creation of tables and charts that currently require the active involvement of bots. One common problem with the project was pushing against template preprocessor memory limits due to template expansion, and technically abusing templates by implementing what is essentially a database inside a wiki.

Similar kinds of projects could easily be done with "real world" commodity or exchange markets (where permission could be granted for Wikimedia projects to store that kind of data) or for other kinds of economic or demographic data that needs to be used simultaneously in multiple places in a wiki. In this case, this particular project could serve as a testbed for a large scale database with real numbers that have been gathered over the course of several years on a daily basis and the source of that data is licensed under open source terms. Indeed the Runescape Wiki in this case has more accurate long term data than even the company who made the game.

Inverse properties


Properties can be meaningfully inverted. For example, if you have the property parent of, the inverse of that property would be child of: if A is the parent of B, then B is the child of A. That means that if you create a statement for item A which says 'parent of B', that should automatically imply the corresponding statement for item B ('child of A').

However, this feature is currently not supported. A workaround could probably be made by creating, for a given property, another property to serve as its inverse and then creating a bot to keep watch over such properties and add statements automatically, but that would be an inferior solution to supporting this feature natively. Supporting this feature would require the ability for properties to have a separate label for the inverse, and also would require either atomically adding an inverse statement to the target item when the user adds a statement to the source item (and equivalently for deletion), or, upon displaying an item, querying all statements to see which of them have that item as the target, and for any matches, dynamically constructing and displaying the inverse statements.
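The bot-based workaround described above might look roughly like this. The property names and the item store are invented for illustration; a real bot would work through the Wikidata API:

```python
# Sketch of the bot-style workaround: whenever a statement uses a
# property with a declared inverse, add the mirrored statement to the
# target item. Property names and the item store are illustrative.

INVERSES = {"parent of": "child of", "child of": "parent of"}

items = {"A": [], "B": []}  # item id -> list of (property, target) statements

def add_statement(items, source, prop, target):
    items[source].append((prop, target))
    inverse = INVERSES.get(prop)
    if inverse is not None:
        mirrored = (inverse, source)
        if mirrored not in items[target]:   # avoid duplicate inverses
            items[target].append(mirrored)

add_statement(items, "A", "parent of", "B")
print(items["B"])
# [('child of', 'A')]
```

Native support would make the two writes atomic; with a bot there is always a window in which only one direction of the relation exists, and deletions need the same mirroring logic.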

A possible problem I foresee with this feature (as well as with any bot-based workaround) would be that some items could be connected to a very large number of other items which could result in large loading times for the highly connected items and visual clutter when all the statements are displayed. If large loading times occur, a possible solution could be piecewise loading via asynchronous calls as the user progresses through the page. For visual clutter, I envision grouping statements by property into collapsible groups, which would be collapsed by default, possibly still displaying a minimum number of statements in the collapsed state.