Wikimedia monthly activities meetings/Quarterly reviews/Reading and Community Tech, July 2016

Notes from the Quarterly Review meeting with the Wikimedia Foundation's Reading and Community Tech teams, July 12, 2016, 9:30 - 11:00 AM PT.

Please keep in mind that these minutes are mostly a rough paraphrase of what was said at the meeting, rather than a source of authoritative information. Consider referring to the presentation slides, blog posts, press releases and other official material.

Slides (to be published at https://commons.wikimedia.org/wiki/Category:Wikimedia_Foundation_quarterly_reviews after the meeting):

Present (in the office): Jon, Toby, Danny, Anne Gomez, Jamal, Elias, Kaldari, Michelle Paulson, Joady, Tilman, Josh, Katherine, Jaime; participating remotely: Maggie; Adam, Stephen N, Baha, Bryan Davis, Corey Floyd, Joaquin, Joe Walsh, Katie Horn, Michael H, Monte, Niharika, Nirzar, Trevor Parscal, Wes, Rita... (copied from Blue Jeans: Adam B, bd808, Bernd Sitzmann, Corey, Dmitry, Joaquin, Joe, Jon, Jon Katz, Katherine M, Katie Horn, Maggie Dennis, mhurd, michael h, Niharika, nirzar, Rita, Stephen, Trevor Parscal, Wes, +10 guests (unnamed))

Note-takers: Michael H., Stephen N. (+ ?)

Intro

Slide 1

Intro from Toby

iOS: shipping great new features

Android: continue to ship regularly, make innovations our readers value

Community Tech: a bright spot, breaking new ground in the way that we work with the community

Katherine: From my perspective, the quarterly review is an educational opportunity to learn about what you're working on and your pain points, and to share knowledge. It should be an educational opportunity for both audiences: for us to understand what you're working on, and for teams to reflect while putting it together. Have a conversation, share knowledge.

Jon: We're going to go through the Reading team's goals, each team's accomplishments, and health metrics.

Reading Strategy

Slide 2

Jon K: We're going to go through, at a very high level, what we do and how we think about it. We want to engage and retain existing users in places where we're well known, and grow usage (new readers) in places where we're not.

Slide 3

Slide 4

JonK: This leads to three operational initiatives:

1. Improve the existing experience. iOS is spearheading new changes and improvements in this direction. All teams participate and make improvements. Check out the wrench icon.

2. Reaching new readers. Still in research phase. Web team will continue to be the driver here.

3. Identify new ways for people to participate in Wikipedia, particularly around interactivity.

All of this is based on foundational work: we don't want to build the same thing four times. Hence the Content Service.

Reading Web team

Slide 6

Reading Web: Adam: Our first goal was getting Hovercards onto multiple wikis. We didn't get as far as we hoped. So far we haven't seen a negative impact on donations, but there were some concerns about the functioning of the feature itself. We were a little nervous going into the quarter: was this feature good enough? As we studied the feature further, we realized we needed to work on it more and do more A/B tests in Q1 before rolling out. We want to see if we can replicate the findings of the Hungarian Wikipedia experiment. Any questions?

Michelle: What were the aspects of the feature that weren't meeting the quality bar?

Adam: It has some peculiarities (mostly UI and layout), like the card sticking around when you move the cursor away from it. It's just not fully polished yet. We do want to get a baseline from Hungarian Wikipedia and make sure users get value from this.

Toby: There are some hurdles with ops around caching.

Katherine(?): Did we run the reader's satisfaction survey?

Adam: We haven't gotten that far yet.

Toby: I think part of this was really taking a step back and understanding what it means to release a feature on time. <breaks up> We adopted a more options-based approach, where we do a series of smaller tests that validate or invalidate our beliefs about the product.

Katherine: seems like a good way of looking at it

Slide 7

Adam: The Hovercards work would improve the encyclopedia experience. For the new readers initiative, we want to decrease page load time and bandwidth consumption, which has a bigger impact in potential growth markets for Wikipedia and the sister projects.

Q4: Got a start on rolling out lazy-loaded images to a handful of wikis, with a plan in Q1 to roll out to all of mobile web. We've seen dramatic savings in image bandwidth when this feature is in place: about a 40% reduction in image bandwidth, which yields about a 16% savings from the baseline in general.
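(For illustration: the core idea of lazy-loading is to strip the real image URL out of the HTML the server sends and only fetch it when the reader scrolls near it. Below is a minimal Python sketch of the server-side half of that idea; it is not MobileFrontend's actual implementation, and the placeholder path is made up.)

    import re

    PLACEHOLDER = "/static/1x1-transparent.gif"   # hypothetical tiny placeholder image

    def defer_images(html):
        """Swap each <img> src for a placeholder and stash the real URL in
        data-src, so client-side script can load it only when it scrolls into view."""
        return re.sub(
            r'<img\b([^>]*?)\bsrc="([^"]+)"',
            rf'<img\1src="{PLACEHOLDER}" data-src="\2"',
            html,
        )

    print(defer_images('<p>Hello <img class="thumb" src="/images/cat.jpg"></p>'))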

Slide 8

Adam (cont.): We saved a lot of bandwidth, but we also wanted to speed up page load times.

<Shows graph on 2G with lazy-load images enabled for Japanese mobile web>

You can see that lazy-loaded images went into effect around July 6; it decreased page load time by about 5 seconds.

We want to thank the perf team for all the support they've been giving us.

Katherine: Looks like a remarkable group. Congratulations.

[In chat: appreciation of this from both Maggie & Wes]

Toby: You know, the one thing that we were expecting <breaks up> Classic internet theory... It's becoming clearer that this might not be how people use Wikipedia. Faster is definitely better, but in terms of lift we're not necessarily seeing it in pageviews.

K: But I mean in terms of loss <breaks up> do we have people leaving us because of slow pages?

Toby: I think we have to dig into the data a little more. What YouTube found is that people who couldn't reach the site before could now reach it. ... From my perspective, the fact that the lift hasn't been there at all is pretty characteristic...

K: This is by pageviews?

Toby: Yes, this [pageviews] is the metric we're using right now, we're going to have to dig in more.

Jon: In fairness to the method, we haven't been doing A/B tests yet, where we'd actually compare the same thing at the same time.

Wes: I was just going to say: given the results you saw, are you planning to do more rollouts?

Adam: We're going to do some specific tests on some specific wikis, so we can see the data and communicate it more broadly. Generally, our plan is to roll out the feature to the mobile web over the quarter. In addition to these top-level goals, we had a couple of other notable things, and I should say we spent a lot of time on maintenance too. When we met at the Reading offsite, we reflected that we have a lot of features sitting in beta, and asked whether we should move some of them into stable. We thought yes. As an example, we rolled out Wikidata descriptions to the Catalan Wikipedia.

Ex.: Wikidata descriptions. We have them in the Android and iOS apps; it's about time we got them onto the mobile web. Currently it's on the Catalan wiki, with plans to roll out to others.

Jon: There are just some community concerns around using Wikidata descriptions.
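(For reference, those short descriptions come from Wikidata. A minimal sketch of fetching one via the standard Wikidata API; the wiki and article title below are just examples.)

    import requests

    resp = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbgetentities",      # look up items by sitelink
            "sites": "cawiki",              # the Catalan Wikipedia
            "titles": "Barcelona",          # example article title
            "props": "descriptions",
            "languages": "ca",
            "format": "json",
        },
        timeout=10,
    )
    for entity in resp.json()["entities"].values():
        print(entity.get("descriptions", {}).get("ca", {}).get("value"))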

Slide 9

Adam: And we were looking at other places we could polish design. One place that seems worthwhile is search. (See bold red square on slide for old design.)


Slide 10

Adam (cont.): It seems like making small changes such as this can actually have an impact on user behavior. Based on initial data, it looks like users are engaging with search quite a bit more, and I think there's a lesson here: seemingly simple changes to the design can actually have an impact on users.

Toby: Design definitely matters. The other thing is that there is a bit of a lottery mechanism with the changes that we make: you'll make 20 changes, 19 of them will do nothing, and then you'll have the 20th where [you'll see a big effect]. I think the project management thing is to...

Wes: Is the product design consistent across the other apps and something you are actually moving forward with?

Toby: ... I think it's unlikely we'll get to it, but you can take a look at Nirzar's deck.

iOS team

Slide 11

[iOS section]

Josh: This quarter on iOS, we completed our quarterly goal before the quarter began.

We thought this goal would take a little longer to roll out to all wikis than it ended up taking; we were done early. We also had a goal carried over from Q3, which was to improve the performance of the app. That ended up being the bulk of the work this quarter. It's currently in beta and hopefully out to users next week.

Universal links are Apple's branding for deep linking, a technology that lets mobile platforms treat apps like websites.

Obviously, universal links provide a ton more entry points into the app. Literally millions.

K: are we seeing change in numbers?

J: We have seen a pretty large aggregate increase in retention. We don't know yet how much this is driving that. [?]

K: is this something you're going to be looking at?

J: Data is kind of an issue on the iOS side. Good questions


Slide 12

Josh: So, I mentioned some of the performance work we've been doing has had kind of a big relationship to design?

Traditionally, the foundation has had kind of a distant relationship with Apple.

Lots of reasons for this. We have actually turned that around by getting contacts with Apple. They even reach out to us about bugs and features. It's partly a testament to the team's effort and open-mindedness about working with partners when they're true to our needs.

We're also having conversations with Partnerships about in-app donations. Apple currently doesn't allow non-profit donations except with a 30% cut. Apple says Wikipedia is special: "We trust you. We can go through a special policy change for Wikipedia." That's a big change in our relationship with them and our ability to work with them.

Toby: Apple does something and we just...

Katherine: This is huge. The cross collaboration across departments. The impact this could have. I'm really appreciative of the work this team has done. I'm really excited about this. I want to call it out as a major accomplishment.

Josh: One place where Apple really excels compared to web or Android is accessibility: making things available to blind and visually impaired users. Making the app accessible to those users is really important. We made an effort in previous versions to improve accessibility but got feedback that we weren't doing a great job of it.

Corey has a contact, a blind iOS developer, who is going to review each version of the app to make sure it's accessible.

Link to TED talk.

The last two things are more about roadmapping, with user-facing changes in mind. Geo-based discovery: taking the next iteration of a maps-based interface for browsing Wikipedia. I also want to call out the Discovery team here. We had some changes to the geo [search] and they implemented them in a week and a half.

The other product mentioned here is prototyping content notifications. This is new to Wikipedia: pushing to the user instead of the user coming to us. It's for people who want to know when articles are trending, new content is available, ... We're going to be doing some measurements to make sure we're not pushing people away, which is a risk with push notifications.

From a historical perspective, this is a new thing for Wikipedia. We want to be very careful.

In terms of misses, we continue to have some real struggles with quantitative data tracking. When we redid the app in a previous quarter, we decided to invest in the open-source Piwik.

EventLogging, our in-house system, is really built for the web. [...] Doable, but all very expensive (time consuming in iOS native app development) and not straightforward.

Piwik is an open-source option to replace/augment EventLogging. It didn't scale very well and we had to turn it off. We haven't released in a couple of months, so it's been off; the next release is coming very soon. We're very much looking forward to working with the Analytics team to get it back up and running.
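(For context, Piwik (now Matomo) collects usage data through a simple HTTP tracking API: the client sends one request per event. A rough sketch of a single tracking hit; the tracker URL and site id are placeholders, and this is not the app's actual instrumentation.)

    import uuid
    import requests

    PIWIK_URL = "https://piwik.example.org/piwik.php"   # placeholder tracker endpoint
    SITE_ID = "1"                                       # placeholder site id

    def track_screen_view(screen_name, visitor_id=None):
        """Send one Piwik/Matomo tracking hit for an in-app screen view."""
        requests.get(PIWIK_URL, params={
            "idsite": SITE_ID,
            "rec": "1",                                  # required: record this hit
            "action_name": screen_name,
            "url": "https://app.example.org/" + screen_name,
            "_id": visitor_id or uuid.uuid4().hex[:16],  # 16-hex-char visitor id
            "apiv": "1",
        }, timeout=5)

    track_screen_view("Article/ReadMore")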

The other thing we had trouble with this quarter was stability issues with the app. The crash rate is not very high, but the timing and type of the crashes are really unfortunate. We get one-star reviews for crashes and 4-5 stars for everything else. Our goal is to focus on stability and get rid of crashes. The Specialist Guild helps us test the app, but there are millions of devices across the world, and we're looking at more ways to open up testing to the community and to automation. We need to try things on more devices.

Crash rates are not that high (though higher than we'd like them to be), but crashing is a severely negative experience for users, with a very strong negative effect on app store ratings and reviews. We need to test on more devices before shipping to users. Testing: TSG (our testing company/consultants) does a good job at feature testing, but we also need broad testing across many devices, etc.

K: So for the store rating, once we get things to a more stable build, are we doing things to notify users that crashes have been fixed?

Josh: When we put up a new version we get a new star rating. We also put a notification up?

Toby: QA has been identified as a concern across all of our engineering teams. We use TSG, who are good at what they do. There's certainly a gap around more exploratory testing that Product needs to have a discussion about. We're seeing it here.

Android team

Slide 13

Dmitry: Android. We had some work that spilled over from the previous quarter, which was reading lists. It took a few more weeks, but we got it done. Reading lists haven't had quite the effect on user retention that we were hoping for; we might need more data to understand the true impact they're having. They're very much being used by the users who have discovered them. It was worthwhile.

Our goal for this quarter was to release the feed in the app. As a reminder, it's the feature where we show a feed of reading suggestions, some personalized (based on reading history) and others based on featured wiki content (trending, featured articles, picture of the day, and so on). The iOS app had completed their version of the feed, and we really wanted to follow up on Android with the benefit of the iOS experience. We want to focus on retention, reading, and sharing. We came super close to releasing this quarter; at this point, we're only blocked on deploying the backend portions of the feed to production. This goal is just pending deployment and then it will be done.

Toby: The Reading team really tries to put quality first. I'm sure if we had cut corners we could have turned this from red to green. But that's OK; we need to produce top-quality experiences.

Katherine: *nod*

D: New installs have been declining slightly, but we're hoping to boost that back up with the feed. Overall health is good. The crash rate is very low, and we have excellent frameworks to monitor crashes. Code quality has been increasing.

K: I keep asking questions. The first is discoverability: those who find it like it; are there experiments around discoverability? The second is making sure that it's a really high-quality product before it goes into production. How has the service progressed?

D: We talk about the service on the next slide. As far as reading lists, I didn't intend to imply that discoverability was an issue. They're relatively discoverable; we don't do that much to emphasize them, partially because they won't be backed up to the user's account (they were going to be, but now they can't be).

Jon: They're never going to be used by 90% of users; it's more of a power-reader feature. (Dmitry agrees.)

K: ok

Slide 14

D: I really wanted to highlight the work our team has been doing on the backend side. Part of the work on the feed has been on the backend ... Prior to this quarter, we had built a whole framework that allows us to build these content endpoints really easily. We were able to provide this content in a really lightweight and structured format, and it's available to anyone, not just the Android app. (Coincidentally) Free Basics (Facebook) was curious about whether we have a feed of featured content they could use, and we could point them to this. No less than five new endpoints: random article (plus brains), featured article, in the news, trending, and so on. Talking about that, I really wanted to acknowledge the Services team for all of their guidance. I also wanted to call out the Team Practices Group; they were really instrumental in helping us structure the work.

J: The iOS team building the feature first allowed us not to make the same mistakes twice.

D: The designs were half the work, and having them in place was really great.

Toby: Services are one of our pillars. ... One of the things we think about is that we have three different platforms; the trick is to push stuff down into services. A lot of the stuff that Josh and team have figured out, Dmitry and team have pushed down into the service. Now, for the engineers in the room, the fact that you can quickly and easily create an endpoint is both a blessing and a curse: we can end up with a thicket of endpoints.

D: Did that answer your question?

K: yes

D: We're excited to see who uses this.

D: It is clear that reading lists were a worthwhile feature to have built. There's a median of five articles saved in reading lists. There are almost 5,000 users with 100 articles saved, and almost 23,000 with more than 10. One user has saved over 7,000 articles, and that's not really an outlier.
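(As an aside on the feed endpoints Dmitry described above: they are plain HTTP GETs returning lightweight JSON, so any client, not just the apps, can consume them. A minimal sketch against the public REST API; the path and field names match today's aggregated feed endpoint and may differ slightly from the exact 2016 ones.)

    from datetime import date
    import requests

    # Aggregated featured-content feed: featured article, most-read, news, etc.
    url = "https://en.wikipedia.org/api/rest_v1/feed/featured/{:%Y/%m/%d}".format(date.today())
    feed = requests.get(url, headers={"User-Agent": "feed-demo/0.1"}, timeout=10).json()

    tfa = feed.get("tfa", {})
    print("Featured article:", tfa.get("titles", {}).get("normalized"))
    for item in feed.get("mostread", {}).get("articles", [])[:5]:
        print(item.get("titles", {}).get("normalized"), item.get("views"))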

Reading Infrastructure

Slide 15

Adam: Continuing the theme of the service backend, the Reading team has been continuing work on the auth backend. We shipped AuthManager with, relatively speaking, very few regressions. There were a couple of extensions, but it pretty much went off without a hitch.

Theme: complexity is the enemy of security. The new AuthManager service reduces complexity.

We can never guarantee that there will never be vulnerabilities, but we can make it easier for devs to work with. We look forward to seeing how people use this technology (2FA, third-party logins, and so forth).

Tip of the hat to Bryan Davis, who was here in the middle of the project, and to Brad and Gergő, who worked on it throughout the project.

K: Why is this within the reading team's purview?

BD: We started the project before the reorg and we didn't drop it on the floor, because it was really important to do and it just took a long time.

K: That's fair, and congrats. Good to see us join rest of the world on 2FA. Thanks for pushing it through.

(Bryan, etherpad-only aside: there is no team in the Wikimedia Foundation currently that would on paper "own" a major revamp of shared MediaWiki core functionality like this. This is an unfortunate side effect of the Product-centric reorg and a lack of treating MediaWiki itself as a product.)
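(For context on the 2FA mentioned above: two-factor login codes are typically time-based one-time passwords, per RFC 6238. A sketch of the generic algorithm, assuming nothing about MediaWiki's actual OATHAuth implementation.)

    import base64, hashlib, hmac, struct, time

    def totp(secret_base32, digits=6, period=30):
        """Current RFC 6238 time-based one-time password for a shared secret."""
        key = base64.b32decode(secret_base32, casefold=True)
        counter = int(time.time()) // period                 # 30-second time step
        msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()   # HOTP inner HMAC-SHA1
        offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))   # example base32 secret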

Slide 16

Community Tech: Toby: You want to go into the appendix and then we give the mic to Danny and Ryan? Jon: I'll see what I can get to in ten minutes.

[see discussion on slides 25-27]

Community Tech

Slide 17

Danny: Hi everybody! We are now six months into the great experiment of Community Tech. We did the Community Wishlist Survey to get the top ten wishes for what people want. Our core stakeholders are the current active contributors.

strategy slide:

This slide has the challenges we've found and what we've learned. Many of these are things that we do already as product and engineering people; we learned how to be more disciplined about some of them.

Interpretation:

Some of the wishes were very vague and some very broad; we're trying to scope them down into small and achievable goals. There's a danger of a lot of feature creep; we've all got bodies buried in the graveyard under the tombstone of "we tried to do too much".

Planning:

The second part is verifying that it's what the stakeholders want by showing them early iterations and wireframes. Wikimedians are super self-aware and detail-oriented; they know what they want and have strong ideas about how to implement it.

Our team is really smart and we're good at product, but we don't actually know better than the stakeholders.

Collaboration:

The third part is actually collaborating with the volunteers. A lesson we learned: don't crush volunteer developers' dreams! We should be working with them rather than sticking our stuff over theirs.

Slide 18

Danny (cont.): Our goals this quarter were to ship features or fixes.

Slide 19

Kaldari: So the first one is migrating dead external links to archives.

We worked with Cyberpower678, a volunteer developer who was working on a dead-link bot that took dead links that were already marked and replaced them with archive.org links. There are only so many links this works on, because the links had to be identified manually.

Now Community Tech has added support to automatically detect dead links. It's a really difficult thing to identify whether links are dead or alive because of "soft 404s" (e.g., a missing page redirecting to the main page). There are lots of different types of links (links to websites, PDFs, FTP sites), and all of these have different ways to figure out whether the content is alive or dead. Surprisingly, there was no existing open-source tool that did this well, so we had to write our own. The bot will start going through Wikipedia probably next week.
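(A much-simplified sketch of the kind of check involved, with made-up heuristics rather than the team's actual detector: treat hard HTTP errors as dead, and treat a redirect from a deep URL to the site's front page as a likely soft 404.)

    import requests
    from urllib.parse import urlsplit

    def looks_dead(url, timeout=10):
        """Rough dead-link heuristic: hard errors plus one 'soft 404' signal."""
        try:
            resp = requests.get(url, timeout=timeout, allow_redirects=True)
        except requests.RequestException:
            return True                       # DNS failure, timeout, refused connection, ...
        if resp.status_code >= 400:
            return True                       # hard 404/410/5xx
        # Soft 404: a missing page silently redirected to the site's homepage.
        if urlsplit(url).path not in ("", "/") and urlsplit(resp.url).path in ("", "/"):
            return True
        return False

    print(looks_dead("http://example.com/some/long-gone/page"))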

Numerical sorting in categories:

Another seemingly simple thing that turns out to be harder than it sounds.

All titles are sorted lexicographically by character value rather than by semantic meaning; e.g., 100 could come before 99, which doesn't make much sense to humans. The fix is to use UCA (https://en.wikipedia.org/wiki/Unicode_collation_algorithm), which is what we ended up helping to create [support for]. There are actually two different components to this: one of them is switching the wiki over [to the new collation]; the other [...]. Unfortunately, this required some pretty deep database changes, because the index of how keys were sorted wasn't optimized; it literally took weeks. We worked with Brian Wolff and some other devs. At this point we just need to put in the code change, this quarter instead of last quarter. English Wikipedia has already agreed to accept this change.
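(A small Python illustration of the problem and of a numeric-aware sort key; this is just to show the behaviour, not the MediaWiki/UCA implementation.)

    import re

    titles = ["Article 100", "Article 99", "Article 9"]

    # Plain lexicographic sort compares character by character, so "100"
    # sorts before "99" (because "1" < "9").
    print(sorted(titles))                   # ['Article 100', 'Article 9', 'Article 99']

    def natural_key(title):
        # Split into digit / non-digit runs and compare digit runs as numbers,
        # roughly what a numeric-aware collation does.
        return [int(p) if p.isdigit() else p for p in re.split(r"(\d+)", title)]

    print(sorted(titles, key=natural_key))  # ['Article 9', 'Article 99', 'Article 100']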

Improving plagiarism detection tools:

There are already a bunch of tools for this kind of thing. In most cases plagiarism is also a copyright violation, and it spreads like a virus. One of the things we did this quarter was to fix two of the plagiarism detection bots that had been broken by Yahoo API changes. We tried Yandex and Bing and had problems, mostly with their terms of service; we were using their APIs in ways that were unexpected and that most people don't. We ended up going with Google, which costs a lot more money, but we got a $20,000 grant. Everyone is really happy because it's actually way better. Sheree was really instrumental in helping us set up that partnership.

Katherine: [question about the financial aspect]

Kaldari: We expect it to last about a year before that money runs out. We'll have to ask for a new round of credits each year or budget for it.

T: we did budget

Wes: Thanks, this has been a long effort, thanks for your diligence.

Kaldari: The Yahoo API was a paid-for service ... We also have more plagiarism detection work coming up next quarter.
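(A rough illustration of how such a bot works; the search endpoint, key, and response shape below are placeholders, not the actual Google integration: sample a few long sentences from the new text, search for each as an exact phrase, and flag pages that keep matching.)

    import re
    from collections import Counter
    import requests

    SEARCH_URL = "https://search.example.org/query"   # placeholder search endpoint
    API_KEY = "placeholder-key"                       # placeholder credential

    def suspect_sources(article_text, min_hits=3):
        """Count which external pages repeatedly match exact sentences from the text."""
        sentences = [s.strip() for s in re.split(r"[.!?]", article_text)
                     if len(s.split()) >= 12]         # only reasonably long sentences
        hits = Counter()
        for sentence in sentences[:10]:               # sample a few to stay within quotas
            resp = requests.get(SEARCH_URL,
                                params={"q": '"%s"' % sentence, "key": API_KEY},
                                timeout=10)
            for result in resp.json().get("results", []):
                hits[result["url"]] += 1
        return [url for url, n in hits.items() if n >= min_hits]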

Slide 20

Danny: Our other big goal for this quarter was really to build new relationships with volunteers, folks from other chapters, and teams, specifically the new collaboration with the newly-renamed Technical Collaboration team. That team is helping connect volunteer devs with volunteer-sized tasks from the wishlist survey. There's a link in the presentation to the survey results that has all the detail on what's happening right now.

One thing that we're going to be working on next year is building a larger backlog that volunteers can pick from for hackathons, etc. People worked on a lot of items at the Jerusalem and Berlin hackathons; by Esino Lario it had kind of been picked over. So that's the thing we'll be working on this year.

We're also working with WMDE's TCB team; as we saw at the metrics meeting, we're working with them on the revision slider, and, from the Berlin conference, the watchlist expiry project. Getting together with folks and moving stuff along.

Slide 21

Danny (cont.): Cross-wiki watchlist. This was one of those moments where we really figured it out. We put wireframes up super early and people really didn't like them; they really wanted to see the watchlist in a more compact form. Four out of the first 10 people who commented consistently used the word "compact". Because you put divs in it, you try to make the design more friendly and more open, but people really did not like it. We redid the wireframes and made them much closer to what they are today. The process of going back and verifying [was valuable]. As Josh was saying, it's difficult because if people like stuff they don't talk as much; there's less activity.

Toby: The Reading team has also done a few small experiments on getting very specific feedback. It's worked very well.

Danny: Definitely useful although hard at first. A great learning experience.

Slide 22

Danny: Bryan has also been working with our team this quarter on Tool Labs support.

Bryan: I approached Toby, I think in November, with a crazy idea that I would do a different job: trying to provide better support for the volunteer community that uses Tool Labs to create tools and bots that help fill in gaps in the MediaWiki platform, most of which help enable content curation on the various wikis.

I've done a lot of things, but they've been kind of scattershot, with no big features. We did manage to publish a basic vision document for a revamp of the workflows that tool devs have to go through, to make it easier to onboard new users into the system. We got a final analysis done on a Tool Labs user survey that was kicked off by Yuvi and the Research department in October 2015; they ran the survey, it had been partially analysed and published, and we got a lot of graphs up. Then I started working on the vision roadmap. We're in security review for a tool that will allow tool maintainers to create Git repositories to store their code in version control.

Toby: This is very important. One of the issues that we've run into is that there are actually critical bots that have neither a license nor source control; we can't support them even if we want to. As part of the roadmap, Bryan is making it easy to support a bot; there's no reason to do it any other way.

B: MerlBot works on German Wikipedia to do page archiving and curation, you know, sweeping stuff up after so many days. It's broken. The author, for good reasons, isn't available to help, and there's no published source code and no license. I've been working with TCB to see if we can correct that. A lot of what I'm going to do in the next quarter is socially oriented: seeing if we can restart things and conversations that have kind of stalled out.

K: That's exciting. Toby would tell me a little bit about your goals. You identified a need.

Toby: Cool, thanks. We've got about five minutes.

Last note, about staffing: we hired a volunteer developer who is super smart and great. We have one other open job req, and we want to do the same kind of thing, emotionally and ideologically.

Also, on a practical level, we're doing 10 things at the same time, and onboarding is even more difficult when someone is completely new to the project. That's what's going on in Community Tech.

Toby: I have one meta point to make about Community Tech. I've been thinking a lot lately (along with Bryan D) about scaling and how we scale as a foundation and as a movement, like some of the work CT is doing with our community and chapters. Is this something we can do in our day-to-day activities? Bryan isn't going to write software; he's going to help people write software. Very interesting and promising.

We have one or two minutes left. Questions? Thoughts?

Katherine: I'm also really excited about the work CT is doing. There have been a lot of acronyms floating around. Cross-collaboration: it's very aligned with the ethos of collaboration and community and the movement. The initial feedback is very positive, even if the mocks got lots of feedback. This is a great place to be, and the first six months have been really positive.

Wes: Presenting at conferences has been a good thing as well. Lots of good feedback to me from both the hackathons and Wikimania.

Slide 23

Slide 24

New Readers

Slide 25

Anne: As Jon mentioned, this is [...]. Over the last three months we've been [...]; we talked about the trip to Mexico last quarter, and we also conducted two more deep dives, into Nigeria and [...]. I would love to share results with you, but we actually have the workshops tomorrow and the next day, so the timing is unfortunate. But we will be sharing that out to staff and putting it on Meta. <breaks up>

Anne: We don't know what form the results will take yet, but something [some presentation] will happen this quarter. We're debating whether we do one per project or [...]. Yeah, we're trying to figure that out.

Outside of those deep dives, we've also been working on community involvement. We put together a list of community members who participate in these kinds of initiatives at Wikimania. I plan to talk to each individually to make sure they're [...]. Hopefully we can get a little more involvement.

Research is done in Nigeria, Mexico, ... We'll be presenting that this quarter. We don't have results yet; presentations are tomorrow.

There's a push within the org to move away from the term "Global South." We're not sure where that term even comes from, and it's not clear how countries made it onto one list or another. We're working on a list of target countries with a high opportunity for us.

K: I'm really excited to see where this goes. Right now I think it's nascent, but it's an investment. If it pays off, 50% or 20% [...]. This is a way to demonstrate to other teams in the org that we can and should be thinking more about cross-organizational goals. I think the process has been a really helpful model, regardless of whether [it pays off].

T: it's not like we're slicing a person and a half off on this thing.

A: We're trying to define what the future looks like. Because this is so new, things have been evolving as we go.

K: People seem excited about the fact that we're listening first.

A: I hear that second- or third-hand; people don't actually give a lot of feedback when asked. We get more feedback at Wikimania.

Reading Metrics Strategy

Slide 26

J: I'm just going to talk about take-homes on things that make the Reading team effective.

Toby: Quim and some other folks from Product and CL are ... The guided? What's my magic word? We are doing a deep dive into the relationship between community liaisons, product, and the community itself to identify common issues that we can work on together. The next step is for the team to come together and agree that this is what we're working on. I think we will all benefit from figuring out what matters.

The relationship has always been kind of ill-defined and this will help us figure out how to work together better.

Jon: The apps teams haven't historically had to care about what the community wanted, because the community didn't care about apps. Now the apps teams are focused more on understanding data. We're trying to get the apps out there into the community and promote the innovative features we've been working on.

Jon: One other thing we're trying to improve is how content is rendered on mobile, improving how content appears on mobile. There's a technical and programmatic component; the next is going to the community and saying, this is how you change the dial.

Toby: We talked to Amanda B from Maggie's team; she had some great feedback.

Slide 27

Jon: We mentioned some issues around data and analytics. Punchline: as we move to mobile, we've been trying to understand why pageviews are flat or down. We now know last year's drop was due to the China block and the switch to HTTPS, but also that when readers move to mobile we see shallower sessions: half the session length, half the number of pageviews. That's one thing we learned this quarter that is really crucial. We were trying to understand why pageviews are down; this explains what we were seeing and also gives some insight into why we're seeing a decline.

K:

JK: I'm not obsessive about this like Google or something, but we are about... making it easier for people to learn things offsite (?), where to focus efforts.

Toby: I also think it's pretty straightforward: make the reading experience better on mobile. If the reading experience is better, people will stay longer. I think Jon and I have had a long-running disagreement, and I'm not saying you're wrong here <jokes>, but yes, mobile has had a big effect on how people consume our knowledge.