Research:Test External AI Models for Integration into the Wikimedia Ecosystem

From Meta, a Wikimedia project coordination wiki
Tracked in Phabricator:
Task T369281
Duration:  2024-07 – 2024-12

This page documents a research project in progress.
Information may be incomplete and change as the project progresses.
Please contact the project lead before formally citing or reusing results from this page.

As part of our contributions to WMF's 2024-2025 Annual Plan, Research and collaborators are working on identifying which AI and ML technologies are ready for WMF to start testing with (at the feature, product, ... levels), among the many models that are already available and continue to be released.

Hypothesis Text

Q1 Hypothesis

If we gather use cases from product and feature engineering managers around the use of AI in Wikimedia services for readers and contributors, we can determine whether we should test and evaluate existing AI models for integration into product features, and if so, generate a list of candidate models to test.

Methods and Tasks

  1. Define and prioritize existing use-cases for AI integration into products through interviews and surveys with Product Leads and Product Managers T370134
  2. Define a set of criteria for identifying existing models to test, and select candidate models based on those criteria T370135
  3. Define a protocol for external model evaluation
  4. Test models on WMF infrastructure
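The criteria-based selection step (Task 2) could, for example, be operationalized as a weighted scoring of candidate models. The sketch below is purely illustrative: the criteria names, weights, and model scores are hypothetical assumptions, not the project's actual selection criteria or candidates.

```python
# Hypothetical sketch: rank candidate models by a weighted sum of
# per-criterion scores. Criteria, weights, and scores are illustrative only.

CRITERIA_WEIGHTS = {
    "open_license": 0.3,            # is the model openly licensed?
    "multilingual_support": 0.3,    # coverage of Wikimedia's languages
    "fits_wmf_infrastructure": 0.2, # can it run on WMF infrastructure?
    "community_maintenance": 0.2,   # is it actively maintained?
}

def score_model(scores: dict) -> float:
    """Weighted sum of per-criterion scores, each in [0, 1]."""
    return sum(w * scores.get(c, 0.0) for c, w in CRITERIA_WEIGHTS.items())

# Illustrative candidates with made-up per-criterion scores.
candidates = {
    "model-a": {"open_license": 1.0, "multilingual_support": 0.8,
                "fits_wmf_infrastructure": 0.6, "community_maintenance": 0.9},
    "model-b": {"open_license": 0.5, "multilingual_support": 1.0,
                "fits_wmf_infrastructure": 0.9, "community_maintenance": 0.4},
}

# Rank candidates from highest to lowest overall score.
ranked = sorted(candidates, key=lambda m: score_model(candidates[m]), reverse=True)
print(ranked)
```

Such a scheme makes the trade-offs between criteria explicit and auditable; the actual criteria and weighting, if any, would come out of Task 2.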

Timeline

[Q1 24-25] Tasks 1 and 2
[Q2 24-25] Tasks 3 and 4

Results

TODO: Add initial results for each task when ready

Selected Use-Cases

Model Selection Criteria

Selected Models

Evaluation Protocol

Model Test Results

Resources

References
