Talk:Wikimedia Blog/Drafts/MoodBar Study


Comments

I think this has a lot of really interesting information, but it's a little too long and elaborate for a blog post (that's one big problem with drafting blog posts on the wiki – you're tempted to use level 3 headers and then all of a sudden you've got a documentation page, not a blog post). Does the average reader really need to know details like all users registered between 2011-12-14 00:00:00 UTC and 2012-05-22 23:59:59 UTC or that you analyzed the edit count of these users at 1, 2, 5, 10, and 30 days after they first clicked “Edit”? People who really want to know that stuff can go read the Meta pages; I think the average reader will be interested in 3 basic things: what is the MoodBar/FeedbackDashboard, what did you analyze, and what do the results mean?

So, practically speaking, I would suggest cutting down the general "history of the MoodBar" to a short paragraph, and combining the second and third sections into one or two paragraphs on what you analyzed and what you found. You can probably take out the first list in "what we found" and just use the "in summary" part, because that's more easily digestible. Also, maybe just use one figure instead of two? (I like Fig. 2 more.)

Lastly, I would strongly recommend a lead section, before the general history or anything else, that summarizes the entire post in a sentence or two. Any journalist-y person you talk to will tell you that you'll lose your entire readership if you don't tell them exactly what you're talking about and why it matters in the first couple of sentences of a blog post. This shouldn't be a research question, but more of a question and answer like: "We wanted to see what effect MoodBar has on new users, and so far we've found that users who posted feedback and received a helpful response were more likely to be productive, active contributors." Maryana (WMF) (talk) 17:37, 27 June 2012 (UTC)

I added the lead section and cut down the unnecessary bits and pieces. I am not sure about removing Fig. 1 (I also like Fig. 2 more) because it lets the reader appreciate the difference between MoodBar and non-MoodBar users. Fig. 2, instead, contains only MoodBar users. Let me know what you think of it. Junkie.dolphin (talk) 21:45, 28 June 2012 (UTC)
  • Yes, that's a great lead! :)
  • I don't think it's really a big deal if you want to include both figures, but my instinct is always less rather than more, keeping in mind people's blog-reading attention span :-P
  • One last thing: I don't think the "How you can help" section is all that necessary – since this is all still pretty experimental, it seems odd to actively herd people into responding to feedback. And this post is more about the experimental/research side of MoodBar, anyway. You might want to save that section for a later post in this series. Maryana (WMF) (talk) 22:50, 28 June 2012 (UTC)
I'm concerned about including the LOESS-smoothed figure in a public posting because it appears to convey much more confidence than it actually does. Do you have a plot of means with error bars, or some other type that the educated lay reader may find more familiar? I'm also really surprised that the activity level of editors appears to go up across the first month. This also appears to contradict your analysis from last summer for low-frequency editors (the most common type). --EpochFail (talk) 00:42, 4 July 2012 (UTC)
Do you mean MoodBar users or all users? The 30-day average edit count difference for the reference group is negligible, but I'll let Giovanni comment on this. --DarTar (talk) 00:25, 5 July 2012 (UTC)
If you check the report, we have the full version of Figure 1 (in the blog post we decided to include only its rightmost panel), which is what you are asking for: the conditional means by day and treatment, only not broken down by mood type. Breaking them down by mood type would be easy, but what would that tell you beyond Figure 2? The means would be very close to the LOESS prediction anyway, only you wouldn't get the nice approximation for the rest of the interval. I can try to produce a version of Figure 2 with means and error bars by mood type superimposed on the LOESS to give you an idea, if you want.

Regarding the lifecycle analysis from last summer, these findings do not seem to me to contradict it. The curves in Figures 1 and 2 refer to the cumulative edit count, while the lifecycle plots refer to the daily edit count, so it is normal that the former can only grow while the latter can either increase or decrease. Or are you referring to the slopes of the LOESS estimates? They look slightly concave to me. Anyway, if you look at the orange bar in Figure 1 of the full report, you will see that in the first 30 days the average edit count for all active users (a subset of the people I took into account last summer) grows by (what I would say is) at most 2 edits. That seems consistent with a daily edit count distribution that is inflated with zeros (most people make no edits on most days) and has some large outliers (the very active users). Junkie.dolphin (talk) 08:22, 5 July 2012 (UTC)
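For concreteness, here is a minimal sketch of what "per-day means with error bars superimposed on a LOESS curve" would look like. This is not the study's actual plotting code: the data are synthetic stand-ins (zero-inflated daily edit counts accumulated per user, roughly matching the shape described above), and the column names and smoothing parameter are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 users over 30 days. Daily edits are
# zero-inflated (most users make no edits on most days); cumulative
# counts are their running sums, so each user's curve can only grow.
n_users, n_days = 200, 30
daily_edits = rng.poisson(lam=0.1, size=(n_users, n_days))
df = pd.DataFrame({
    "day": np.tile(np.arange(1, n_days + 1), n_users),
    "cum_edits": daily_edits.cumsum(axis=1).ravel(),
})

# Conditional mean and standard error of the mean for each day.
by_day = df.groupby("day")["cum_edits"].agg(["mean", "sem"])

# LOESS fit over the raw (day, cumulative edit count) points.
smoothed = lowess(df["cum_edits"], df["day"], frac=0.3)

plt.errorbar(by_day.index, by_day["mean"], yerr=by_day["sem"],
             fmt="o", capsize=3, label="daily mean ± s.e.m.")
plt.plot(smoothed[:, 0], smoothed[:, 1], label="LOESS")
plt.xlabel("days since registration")
plt.ylabel("cumulative edit count")
plt.legend()
plt.show()
</syntaxhighlight>

The error bars make the day-by-day uncertainty visible alongside the smoothed trend, which is one way to address the concern that a LOESS curve on its own conveys more confidence than the data support.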

Helpful or useful?

Can the two terms be merged for simplicity? --EpochFail (talk) 00:20, 4 July 2012 (UTC)

"Active editors"[edit]

It may just be my work in studying the decline talking, but I'm worried that the term "active editor" is confusing, since it is used in WMF statistics to describe editors who make at least 5 edits per month. Further, it appears to me that the term "active editors" is inappropriate here, since many such users have never completed an edit during the observation period. Why not just refer to them as "no feedback" users? Is there something else that makes them different from users who leave feedback? --EpochFail (talk) 00:25, 4 July 2012 (UTC)