Research:Discovering content inconsistencies between Wikidata and Wikipedia/First explorations
There is little research on the consistency between Wikidata and Wikipedia pages, and no existing solution for comparing Wikipedia content with the corresponding Wikidata information. Our goal is to create a mapping between Wikidata and Wikipedia content and to measure the consistency between the two. In this page, we summarize initial results from using Wikipedia content and the corresponding Wikidata information to discover inconsistencies.
Wikipedia covers many different topics; for this research, we collected the Wikipedia and Wikidata information about biographies. For Wikipedia, we have the full page contents, and for Wikidata, we have the corresponding properties and values. Since we have no labeled data about the consistency between Wikidata and Wikipedia content, and we want to apply supervised learning methods to this task, we first need some data preprocessing.
- First, because Wikidata has numerous properties, we chose only the five most frequent meaningful ones: occupation, educated at, place of birth, member of sports team, and country of citizenship. We sampled 2,000 Wikidata claims for each property to build our datasets.
- Second, since a full Wikipedia page is too large to compare with a single Wikidata statement, we work at the sentence level: we ask whether one Wikidata claim and one Wikipedia sentence from the same page are consistent or not.
- Third, we define the labeling as a two-class classification problem (consistent or not). An example is presented in Table 1, and the statistics of the datasets are presented in Table 2.
Label=0: Consistent. If the Wikidata value, or one of its Wikidata aliases, is mentioned in the Wikipedia sentence, we consider the pair consistent.
Label=1: Inconsistent. Starting from the label=0 data, we sample other values of the same property to generate twice as much inconsistent data.
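As an illustration, the labeling procedure above can be sketched as follows (function and variable names are our own, not the actual pipeline; the alias lookup is passed in as a plain dict):

```python
import random

def make_labeled_pairs(claim, sentences, value_pool, aliases=None, neg_per_pos=2, seed=0):
    """Weakly label (claim, sentence) pairs.

    claim: (property, value) tuple, e.g. ("place of birth", "Pavlovsky Posad")
    sentences: sentences from the corresponding Wikipedia page
    value_pool: property -> list of possible values (for negative sampling)
    aliases: value -> list of Wikidata alias strings
    """
    rng = random.Random(seed)
    aliases = aliases or {}
    prop, value = claim
    names = [value] + aliases.get(value, [])
    pairs = []
    for sent in sentences:
        # Label 0: the value or one of its aliases appears in the sentence
        if any(name in sent for name in names):
            pairs.append(((prop, value), sent, 0))
            # Label 1: replace the value with other values of the same
            # property to generate twice as many inconsistent pairs
            others = [v for v in value_pool[prop] if v != value]
            for fake in rng.sample(others, min(neg_per_pos, len(others))):
                pairs.append(((prop, fake), sent, 1))
    return pairs
```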
|Consistent (0)||Sentence: "Biography: He was born in Pavlovsky Posad near Moscow."||Wikidata claim: "place of birth Pavlovsky Posad"|
|Inconsistent (1)||Sentence: "At the age of thirteen, he entered Dulwich College."||Wikidata claim: "educated at Shippensburg University of Pennsylvania"|
|Place of birth||Occupation||Country of citizenship||Educated at||Member of sports team|
Looking at Table 2, we can see that the property "occupation" is mentioned less often in the Wikipedia content, while the properties "educated at", "member of sports team", and "country of citizenship" are mentioned more often in their corresponding Wikipedia content.
Because our task is to discover the consistency between Wikidata and Wikipedia, we develop a Co-attention+NSMN deep learning model to detect it. NSMN (Neural Semantic Matching Network) comes from a submitted paper on the FEVER dataset. The model consists of four components. The first is Wikidata and sentence encoding: generating representations of the words in the Wikidata claim and the sentence. The second is a co-attention mechanism: capturing the correlation between the Wikidata claim and the sentence. The third is the Neural Semantic Matching Network: another way to capture that correlation. The last is prediction: generating the detection output by concatenating the co-attention and NSMN representations. The code of the model is provided on GitHub: https://github.com/l852888/Wikipedia-Wikidata-Alignment
Wikidata and Sentence Encoding
The given Wikidata statement and sentence are represented by a word-level encoder. We utilize the pre-trained GloVe word embeddings to obtain the initial vector representations of words. Let C = [c_1, ..., c_m] and S = [s_1, ..., s_n] be the input vectors of the Wikidata claim and the sentence, in which c_m and s_n are the pre-trained word embeddings of the m-th and n-th word, respectively. Then we apply an LSTM layer to learn the correlation between words and obtain new word embeddings. We denote the resulting Wikidata and sentence representations as C ∈ R^{d×m} and S ∈ R^{d×n}, where d is the dimensionality of the word embeddings.
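As a sketch, the GloVe lookup step might look like this (here `glove` is assumed to be a plain dict from word to vector, and the LSTM layer on top is omitted):

```python
import numpy as np

def embed_tokens(tokens, glove, dim=50, seed=0):
    """Stack pre-trained vectors into a d x n matrix; out-of-vocabulary
    words get a random vector."""
    rng = np.random.default_rng(seed)
    vecs = [glove.get(t.lower(), rng.normal(size=dim)) for t in tokens]
    return np.stack(vecs, axis=1)  # shape (dim, n_tokens)
```

A real encoder would then feed each column of this matrix through an LSTM to contextualize the word representations.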
We believe the consistency can be unveiled by investigating which parts of the Wikidata statement are attended to by which parts of the sentence, and that a deep learning method can learn this correlation. Therefore, we develop a co-attention mechanism to model the mutual influence between the Wikidata claim C and the sentence S.
We first compute a proximity matrix as F = tanh(Cᵀ W_l S), where W_l ∈ R^{d×d} is a matrix of learnable parameters. By treating the proximity matrix as a feature, we can learn to predict Wikidata and sentence attention maps, given by

H_c = tanh(W_c C + (W_s S) Fᵀ),  H_s = tanh(W_s S + (W_c C) F)

where W_c, W_s ∈ R^{k×d} are matrices of learnable parameters. The proximity matrix F can be thought of as transforming sentence attention space to Wikidata-claim attention space. Then we generate the attention weights of the Wikidata claim and the sentence through the softmax function:

a_c = softmax(w_hcᵀ H_c),  a_s = softmax(w_hsᵀ H_s)

where a_c ∈ R^m and a_s ∈ R^n are the vectors of attention probabilities for each word in the Wikidata statement and each word in the sentence, respectively. Eventually we generate the attention vectors of the Wikidata claim and the sentence as weighted sums using the derived attention weights, given by

ĉ = C a_c,  ŝ = S a_s

where ĉ, ŝ ∈ R^d are the learned co-attention feature vectors.
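The co-attention computation above can be sketched in NumPy (randomly initialized W_l, W_c, W_s, w_hc, w_hs stand in for parameters that would be learned in the actual model):

```python
import numpy as np

def _softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def co_attention(C, S, k=8, seed=0):
    """C: d x m claim matrix, S: d x n sentence matrix."""
    rng = np.random.default_rng(seed)
    d, m = C.shape
    _, n = S.shape
    W_l = rng.normal(size=(d, d))   # learnable in the real model
    W_c = rng.normal(size=(k, d))
    W_s = rng.normal(size=(k, d))
    w_hc = rng.normal(size=k)
    w_hs = rng.normal(size=k)

    F = np.tanh(C.T @ W_l @ S)                  # proximity matrix, m x n
    H_c = np.tanh(W_c @ C + (W_s @ S) @ F.T)    # claim attention map, k x m
    H_s = np.tanh(W_s @ S + (W_c @ C) @ F)      # sentence attention map, k x n
    a_c = _softmax(w_hc @ H_c)                  # attention over claim words
    a_s = _softmax(w_hs @ H_s)                  # attention over sentence words
    return C @ a_c, S @ a_s                     # co-attention features, each d-dim
```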
Neural Semantic Matching Network
We also consider another deep learning method, NSMN, originally proposed for the FEVER dataset. This mechanism performs semantic matching between two textual sequences, so we utilize it to learn another joint representation of the Wikidata claim and the sentence.
First, we compute an alignment matrix as E = Cᵀ S ∈ R^{m×n}. Then the model computes the relevant semantic component from the other sequence using the weighted sum:

C̃ = S softmax(Eᵀ),  S̃ = C softmax(E)

where softmax(·) is the column-wise softmax function. The aligned representations are then combined:

C̄ = f([C; C̃; C − C̃; C ∘ C̃]),  S̄ = f([S; S̃; S − S̃; S ∘ S̃])

where f is one affine layer and ∘ indicates element-wise multiplication. After that, the combined representations are matched via a recurrent network as

M_c = LSTM(C̄),  M_s = LSTM(S̄)

The two matching sequences M_c and M_s are projected onto two compressed vectors m_c and m_s by max pooling over time. The vectors are then mapped to the final output m by a function f.
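A simplified NumPy sketch of the alignment step follows; the affine layer f and the LSTM matching layer are dropped in favor of direct max pooling, so this is only an approximation of NSMN, not its implementation:

```python
import numpy as np

def _col_softmax(E):
    # Column-wise softmax
    e = np.exp(E - E.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def semantic_match(C, S):
    """Align claim C (d x m) with sentence S (d x n) and return a
    fixed-size matching vector."""
    E = C.T @ S                              # alignment matrix, m x n
    C_tilde = S @ _col_softmax(E.T)          # sentence content aligned to each claim word
    S_tilde = C @ _col_softmax(E)            # claim content aligned to each sentence word
    C_bar = np.concatenate([C, C_tilde, C - C_tilde, C * C_tilde])  # 4d x m
    S_bar = np.concatenate([S, S_tilde, S - S_tilde, S * S_tilde])  # 4d x n
    m_c = C_bar.max(axis=1)                  # max pooling over time
    m_s = S_bar.max(axis=1)
    return np.concatenate([m_c, m_s])        # final matching vector, 8d
```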
We aim at predicting the consistency between the Wikidata statement and the sentence using the co-attention feature vectors ĉ and ŝ and the NSMN feature vector m. Let f_co = [ĉ; ŝ]; f_co and m are each fed into a multi-layer feedforward neural network, and the outputs o_co and o_m are concatenated to predict the final label. We generate the binary prediction vector ŷ = [ŷ_0, ŷ_1], where ŷ_0 and ŷ_1 indicate the predicted probabilities of the label being 0 and 1, respectively. It is derived through

ŷ = softmax(W_f [o_co; o_m] + b_f)

where W_f is a matrix of learnable parameters and b_f is the bias term.
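The prediction layer can be sketched as follows (hidden sizes and the random stand-in parameters are illustrative; real weights would be trained):

```python
import numpy as np

def predict(f_co, m_nsmn, hidden=16, seed=0):
    """Feed the two feature vectors through separate feedforward layers,
    concatenate the outputs, and apply a softmax output layer."""
    rng = np.random.default_rng(seed)
    relu = lambda x: np.maximum(x, 0)
    o_co = relu(rng.normal(size=(hidden, f_co.size)) @ f_co)
    o_m = relu(rng.normal(size=(hidden, m_nsmn.size)) @ m_nsmn)
    h = np.concatenate([o_co, o_m])
    W_f = rng.normal(size=(2, h.size))       # learnable output weights
    b_f = np.zeros(2)                        # bias term
    z = W_f @ h + b_f
    e = np.exp(z - z.max())
    return e / e.sum()                       # [P(label=0), P(label=1)]
```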
We conduct experiments on the different properties and report their performance.
Metrics & Settings
The evaluation metrics include Accuracy, Precision, Recall, and F1. In this research we focus more on performance on the inconsistent data, which Recall reflects. It is also essential to identify the pairs most likely to be inconsistent, so we sort the testing data by predicted probability to see whether pairs with higher predicted probability tend to be inconsistent; treating this as a ranking problem, Precision@50 is employed. We randomly choose 60% of the data for training, 20% for validation, and 20% for testing.
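For example, Precision@50 can be computed by ranking test pairs by predicted inconsistency probability (a small illustrative helper, not the actual evaluation code):

```python
def precision_at_k(probs, labels, k=50):
    """Fraction of the k pairs with the highest predicted inconsistency
    probability that are truly inconsistent (label 1)."""
    ranked = sorted(zip(probs, labels), key=lambda pair: pair[0], reverse=True)
    top = ranked[:k]
    return sum(label for _, label in top) / len(top)
```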
The main results are shown in Table 3 to Table 7. We can clearly see that the Co-attention+NSMN model classifies better than the other methods across the five properties. The results also imply five insights:
- The Co-attention+NSMN model performs better than the other models on every property and produces more balanced predictions. Of the two matching approaches, NSMN relies on explicit formulas to calculate the relationship between Wikidata and sentence, while co-attention relies on more learnable weights to automatically learn the influence between the pair. This shows that although the NSMN and co-attention models each achieve reasonable performance and have their own advantages, combining the two matching methods captures more complete information and significantly improves performance.
- Precision@50 reaches over 70% with the Co-attention+NSMN model, better than the other models. This shows that when the predicted probability is very high, the Wikidata and sentence pair is very likely to be correctly predicted as inconsistent.
- Recall reaches over 95% with the co-attention model, but its accuracy is only about 60%. This shows that the co-attention model is very good at classifying inconsistent pairs but has difficulty classifying consistent pairs. NSMN, however, is more balanced between Recall and Precision, which shows it is better than the co-attention model at classifying consistent pairs. To sum up, combining the information learned by these two methods performs best in Precision@50 and accuracy.
- The properties "country of citizenship" and "occupation" improve the most in Precision@50 and accuracy when using the Co-attention+NSMN model. Moreover, all the properties improve significantly.
- The current method solves the problems of relying only on string matching: it can capture Wikidata and sentence pairs that are not exact matches but whose meanings are consistent.
|Bag of Words+random forest||71.33||80.17||64.25||57.74||58|
|Bag of Words+random forest||72.22||74.09||70.43||64.60||54|
|Bag of Words+random forest||72.58||70.36||74.92||64.03||64|
|Bag of Words+random forest||68.44||70.51||66.47||60.75||64|
|Bag of Words+random forest||71.14||77.60||65.68||59.81||72|
Here we also show some predicted examples to understand our predictions more clearly. First, we present some pairs which are truly inconsistent and are also predicted as inconsistent:
- Sentence: Early life and career Lin was born in Houguan (modern Fuzhou, Fujian Province) towards the end of the Qianlong Emperor's reign.
- Wikidata: place of birth Atlanta
- Sentence: In 2004, Dohnányi returned to Hamburg, Germany where he maintained a residence for many years, to become chief conductor of the NDR Symphony Orchestra.
- Wikidata: occupation filibuster
Then we present some pairs which are truly consistent and are also predicted as consistent:
- Sentence: Vaz Tê has previously played for English clubs Bolton Wanderers, Hull City, Barnsley and West Ham United, Greek club Panionios, Scottish Premier League club Hibernian and Turkish side Akhisar Belediyespor.
- Wikidata: member of sports team West Ham United F.C.
- Sentence: She studied contemporary dance at Simon Fraser University and earned a Master's degree in Fine Arts specializing in Creative Writing from the University of British Columbia.
- Wikidata: educated at University of British Columbia
- First, although we provide an initial idea for labeling the consistency between a Wikidata statement and a sentence without manual labeling, and most pairs are labeled correctly, this labeling method still does not yield a high-quality labeled dataset. One problem is that some sentences mention the value of a property without actually talking about that property. Therefore, improving the way consistency is labeled remains a difficulty.
- Second, even though the Co-attention+NSMN model can indeed better detect inconsistent and consistent pairs, the performance can still be improved by utilizing other technical methods or considering other information.
- In this research, we provide a method to label the consistency between Wikidata and Wikipedia contents, and also propose a model able to predict whether a Wikidata claim and a sentence are consistent.
- Our method combines two different matching models, NSMN and co-attention, to discover the consistency between Wikidata and Wikipedia, and achieves better performance and more balanced predictions than previous models.
- Because we also utilize alias information to generate the datasets, our method can capture Wikidata and sentence pairs that are not exact matches but whose meanings are consistent.
- Even though our proposed method better detects the consistency, the performance can still be improved. Moreover, our labeling approach can still be improved to produce a higher-quality dataset.