Grants:Project/Daimona Eaytoy/AbuseFilter overhaul/Final
Welcome to this project's final report! This report shares the outcomes, impact and learnings from the grantee's project.
Part 1: The Project
We did it! Everything proceeded quite smoothly, except for the "shared variables" part, which we eventually had to exclude from the project. Code quality has improved a lot, and the codebase is now much cleaner and easier to maintain. It uses modern features, it's covered by tests, and it's much less buggy. Yes, there's still room for improvement. But we've done a lot, and we're excited about the results we achieved. Hopefully you are, too!
- On the developer side, the main goal is to turn AbuseFilter into an easily-maintainable extension. We will reduce the need for maintenance, and make it easier for other contributors to work on the code. This will also remove the need for further grants.
- This goal was fully met. Using the metrics explained on the proposal page, we reached the targets exactly as planned. The code is still not perfect, but it's much better than before, and much easier to work with. Detailed reports are available below.
- On the user side, the extension will have fewer malfunctions. It will provide a new feature to help users write more powerful abuse filters.
- The code was made more resilient, and as such, fewer malfunctions are expected. Aside from UBN tasks (which indicated some programming error, and were all promptly fixed), we didn't get any malfunction reports during the project period. The "new feature" part of the goal was changed, but more bugs were fixed, as outlined below.
Important: The Wikimedia Foundation is no longer collecting Global Metrics for Project Grants. We are currently updating our pages to remove legacy references, but please ignore any that you encounter until we finish.
- In the first column of the table below, please copy and paste the measures you selected to help you evaluate your project's success (see the Project Impact section of your proposal). Please use one row for each measure. If you set a numeric target for the measure, please include the number.
- In the second column, describe your project's actual results. If you set a numeric target for the measure, please report numerically in this column. Otherwise, write a brief sentence summarizing your output or outcome for this measure.
- In the third column, you have the option to provide further explanation as needed. You may also add additional explanation below this table.
|Planned measure of success (include numeric target, if applicable)||Actual result||Explanation|
|Test coverage to 50%||Done, currently at 52.98%, as per the official source.||As we planned, the most important backend classes were covered with tests, whereas the view part was left aside (as it's harder to test, and also less important). You can verify at the link above that important code is basically all yellow or green.|
|Namespaced code||Done. Every class in the codebase is now namespaced.||It should be noted that there are two exceptions. The first is some tests: we chose to omit the namespace for all tests that would need to be rewritten (e.g. simplified). The second is two backwards-compatibility aliases, kept because other extensions still use the non-namespaced variants of those classes. Also, the "top-level" AbuseFilter namespace still has a decent number of classes. Some of those classes should be broken down first, then namespaced, but this was out of the project scope.|
|Emptying the old "AbuseFilter" class with its cyclic dependencies||Done, see graph in this section.||If you check, you might actually notice that the class is not empty. However, it contains two deprecated methods (which must stay because other extensions use them), a class constant and a utility method. The latter might be moved to a generic "Utils" class, but since the effect would be the same (and the class should stay anyway), we decided to leave it there. As can be seen in the image, all cyclic dependencies involving this class are gone, and the remaining ones (which predate our overhaul) only involve view classes, which again were not given much attention.|
|Implement shared variables||Not done, goal removed||As explained in a budget change request, this goal turned out not to be feasible. The main reasons that led us to this choice were: 1) we determined that properly implementing this feature would have required a schema change to add a database table; 2) applying such a schema change would take quite a lot of time (and approval by DBAs); 3) we were both close to reaching the amount of budgeted hours, and didn't want to rush, or deliver an incomplete product. The hours that were allocated for this feature were spent on improving the architecture and fixing bugs (see next row).|
|Fix 10 important bugs||Done. We ended up fixing a lot of bugs, and certainly more than 10.||While it is not easy to make a precise list (because not all tasks were tagged with the project tag, and some tasks were only resolved partially), here are two references: list of resolved bugs with the "AbuseFilter-overhaul" tag, list of all AbuseFilter tasks resolved by one of us during the time period. The second link gives us an estimate of 71 bugs. What follows is a selection of important bugs that were fixed:
Looking back over your whole project, what did you achieve? Tell us the story of your achievements, your results, your outcomes. Focus on inspiring moments, tough challenges, interesting anecdotes or anything that highlights the outcomes of your project. Imagine that you are sharing with a friend about the achievements that matter most to you in your project.
- This should not be a list of what you did. You will be asked to provide that later in the Methods and Activities section.
- Consider your original goals as you write your project's story, but don't let them limit you. Your project may have important outcomes you weren't expecting. Please focus on the impact that you believe matters most.
Long story short: it was fun. It's been an interesting experience, which made us face all kinds of challenges. One thing that was already mentioned in the midpoint report, but is so important that it deserves a mention here too, is that the more code we improved, the more things started to feel natural. Thinking of it as a puzzle: with every piece we put in place, the big picture became more and more visible. Once the structure of a specific portion of code was determined, it was quite easy to add the remaining pieces, and honestly, this is a great source of satisfaction! We didn't face many tough challenges, except for the shared variables part. In the end, we determined that this feature could not be implemented, because it would have required quite a lot of resources (especially the schema change). However, the disappointment was outweighed by the number of bugs we fixed and how much the code was improved, which makes the overall balance of the project a net positive.
If you used surveys to evaluate the success of your project, please provide a link(s) in this section, then briefly summarize your survey results in your own words. Include three interesting outputs or outcomes that the survey revealed.
Is there another way you would prefer to communicate the actual results of your project, as you understand them? You can do that here! The best visualization of the outcome can probably be achieved by looking at hard metrics: the class dependency graph above and the coverage report, both already mentioned, are the most powerful tools for visualizing the results.
Methods and activities
Please provide a list of the main methods and activities through which you completed your project.
This is the same as what was explained in the midpoint report. We didn't have many metrics to set up, and we both already had a local development environment.
Please provide links to all public, online documents and other artifacts that you created during the course of this project. Even if you have linked to them elsewhere in this report, this section serves as a centralized archive for everything you created during your project. Examples include: meeting notes, participant lists, photos or graphics uploaded to Wikimedia Commons, template messages sent to participants, wiki pages, social media (Facebook groups, Twitter accounts), datasets, surveys, questionnaires, code repositories... If possible, include a brief summary with each link.
- Coverage report
- Graph of the current class dependencies
- Source code as of March 12 2021
- List of patches on gerrit
- Project-specific dashboard on phabricator
- Generic dashboard on phabricator
The best thing about trying something new is that you learn from it. We want to follow in your footsteps and learn along with you, and we want to know that you took enough risks in your project to have learned something really interesting! Think about what recommendations you have for others who may follow in your footsteps, and use the below sections to describe what worked and what didn’t.
What worked well
What did you try that was successful and you'd recommend others do? To help spread successful strategies so that they can be of use to others in the movement, rather than writing lots of text here, we'd like you to share your finding in the form of a link to a learning pattern.
- Learning patterns/When things go wrong, just tell people
- Learning patterns/Grant projects are not startups
What didn’t work
What did you try that you learned didn't work? What would you think about doing differently in the future? Please list these as short bullet points.
- Choosing a feature that turned out to be very hard to implement. This forced us to revisit our plans. Fortunately, we didn't have much time left, and we had already fixed more bugs than expected, so this didn't cause much disruption. As a reminder for the future, one should make sure to consider all possible nuances and details before proposing a feature like this.
If you have additional recommendations or reflections that don’t fit into the above sections, please list them here.
Next steps and opportunities
Are there opportunities for future growth of this project, or new areas you have uncovered in the course of this grant that could be fruitful for more exploration (either by yourself, or others)? What ideas or suggestions do you have for future projects based on the work you’ve completed? Please list these as short bullet points.
Basically a single thing, hence not using bullet points: improving more portions of the code! Some things were purposefully left aside, specifically the "view" component. There's a lot of room for improvement there, with several classes to decouple, new services to add, and in general, some separation of concerns is necessary. Also, as already mentioned, some of the classes that we touched could be improved further (in a "never-ending" way), so there are plenty of opportunities for whoever wants to participate. There are also a lot of open tasks to be fixed, some of which truly deserve some love.
Part 2: The Grant
Please copy and paste the completed table from your project finances page. Check that you’ve listed the actual expenditures compared with what was originally planned. If there are differences between the planned and actual use of funds, please use the column provided to explain them.
|Expense||Approved amount||Actual funds spent||Difference|
|Development time (bugfix, architecture refactoring, new features)||24000 USD||24000 USD||∅|
|Total||24000 USD||24000 USD||∅|
We basically finished on par, perhaps taking a few additional hours, but still very close to the allocated amount.
Do you have any unspent funds from the grant?
Please answer yes or no. If yes, list the amount you did not use and explain why.
If you have unspent funds, they must be returned to WMF. Please see the instructions for returning unspent funds and indicate here if this is still in progress, or if this is already completed:
Please answer yes or no. If no, include an explanation.
- No. As clarified by the grants admins a few months ago, no documentation is needed for our grant.
Confirmation of project status
Did you comply with the requirements specified by WMF in the grant agreement?
Please answer yes or no.
- In the opinion of the grantee, yes.
Is your project completed?
Please answer yes or no.
We’d love to hear any thoughts you have on what this project has meant to you, or how the experience of being a grantee has gone overall. Is there something that surprised you, or that you particularly enjoyed, or that you’ll do differently going forward as a result of the Project Grant experience? Please share it here!
Essentially what was expressed in the #Story section, and in the midpoint report. It's been an interesting experience, one that I would certainly enjoy doing again. A day may come when I'll want to begin a similar journey, but it is not this day. This day we relax, and enjoy the results of our work. --Daimona Eaytoy (talk) 16:35, 25 March 2021 (UTC)
It was my pleasure to be part of this journey. I'd like to thank Daimona for giving me the opportunity to help out. It was fun at the beginning, and it is satisfying at the end. --Matěj Suchánek (talk) 16:56, 25 March 2021 (UTC)