RSS is undead

RSS died. Whether you blame FeedBurner, or Google Reader, or Digg Reader last month, or any number of other product failures over the years, the humble protocol has managed to keep on trudging along despite all evidence that it is dead, dead, dead.

Now, with Facebook’s scandal over Cambridge Analytica, there is a whole new wave of commentators calling for RSS to be resuscitated. Brian Barrett at Wired said a week ago that “… anyone weary of black-box algorithms controlling what you see online at least has a respite, one that’s been there all along but has often gone ignored. Tired of Twitter? Facebook fatigued? It’s time to head back to RSS.”

Let’s be clear: RSS isn’t coming back alive so much as it is officially entering its undead phase.

Don’t get me wrong, I love RSS. At its core, it is a beautiful manifestation of some of the most visionary principles of the internet, namely transparency and openness. The protocol really is simple and human-readable. It feels like the internet as it was originally designed: static, full-text articles in HTML. Perhaps most importantly, it is decentralized, with no power structure trying to stuff other content in front of your face.
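To see just how simple, here is a minimal sketch in Python that parses a tiny, made-up RSS 2.0 feed using nothing but the standard library (the feed content is invented for illustration):

```python
import xml.etree.ElementTree as ET

# A minimal, made-up RSS 2.0 feed: a channel with a list of items,
# each carrying a title, link, and publication date.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>https://example.com/</link>
    <item>
      <title>Hello, world</title>
      <link>https://example.com/hello</link>
      <pubDate>Sun, 08 Apr 2018 12:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

# Walking the whole feed takes a handful of lines.
root = ET.fromstring(FEED)
for item in root.iter("item"):
    print(item.findtext("title"), "->", item.findtext("link"))
```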

It’s wonderfully idealistic, but the reality of RSS is that it lacks the features required by nearly every actor in the modern content ecosystem, and I would strongly suspect that its return is not forthcoming.

Now, it is important before diving in here to separate out RSS the protocol from RSS readers, the software that interprets that protocol. While some of the challenges facing this technology are reader-centric and therefore fixable with better product design, many of these challenges are ultimately problems with the underlying protocol itself.

Let’s start with users. I, as a journalist, love having hundreds of RSS feeds organized in chronological order, allowing me to see every single news story published in my areas of interest. That use case, though, describes only a minuscule fraction of users, most of whom aren’t paid to report on the news comprehensively. Instead, users want personalization and prioritization: they want a feed or stream that shows them the most important content first, since they are busy and lack the time to digest enormous amounts of content.

To get a flavor of this, try subscribing to the published headlines RSS feed of a major newspaper like the Washington Post, which publishes roughly 1,200 stories a day. Seriously, try it. It’s an exhausting experience wading through articles from the style and food sections just to run into the latest update on troop movements in the Middle East.

Some sites try to get around this by offering an array of RSS feeds built around keywords. Yet stories are almost always assigned more than one keyword, and keyword selection can vary tremendously in quality across sites. The result is that I see duplicate stories and still manage to miss others I wanted to see.
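A reader could at least paper over the duplicates by merging keyword feeds on each item’s GUID or link. A minimal sketch, assuming the feeds have already been fetched and parsed into simple dictionaries (the example items are made up):

```python
def deduplicate(feeds):
    """Merge items from several keyword feeds, keeping the first copy of each story.

    `feeds` is assumed to be a list of lists of items, where each item is a
    dict with at least a 'guid' or 'link' key, as in RSS 2.0.
    """
    seen = set()
    merged = []
    for feed in feeds:
        for item in feed:
            key = item.get("guid") or item.get("link")
            if key and key not in seen:
                seen.add(key)
                merged.append(item)
    return merged

# The duplicate story appears only once in the merged list.
politics = [{"guid": "a1", "title": "Budget vote"}, {"guid": "a2", "title": "Trade talks"}]
economy = [{"guid": "a2", "title": "Trade talks"}, {"guid": "a3", "title": "Jobs report"}]
print([i["title"] for i in deduplicate([politics, economy])])
```

This only fixes the duplicates, of course; it does nothing for the stories that never made it into any of the keyword feeds I subscribed to.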

Ultimately, all of media is prioritization: every site, every newspaper, every broadcast has editors determining the hierarchy of information presented to users. Somehow, RSS (at least in its current incarnation) never understood that. This is a failure both of the readers themselves and of the protocol, which never forced publishers to provide signals about what was most and least important.

Another enormous challenge is discovery and curation. How exactly do you find good RSS feeds? Once you have found them, how do you group and prune them over time to maximize signal? Curation is one of the biggest on-boarding challenges for social networks like Twitter and Reddit, and it has prevented both from reaching the stratospheric numbers of Facebook. The cold start problem is perhaps RSS’ greatest failing today, although it could potentially be solved by better RSS reader software without protocol changes.

RSS’ true failings, though, are on the publisher side, with the most obvious issue being analytics. RSS doesn’t allow publishers to track user behavior. It’s nearly impossible to get a sense of how many RSS subscribers there are, due to the way that RSS readers cache feeds. No one knows how long someone spends reading an article, or whether they opened it at all. In this way, RSS shares a product design problem with podcasting: user behavior is essentially a black box.
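Part of the reason the numbers are so murky is mechanical: a well-behaved reader polls a feed with conditional HTTP requests and caches the result, so the publisher sees one fetch, or just a 304 response with no body, regardless of how many people actually read the items. A rough sketch of that polling loop, using a placeholder feed URL:

```python
import urllib.error
import urllib.request

FEED_URL = "https://example.com/feed.xml"  # placeholder URL for illustration

def poll(etag=None, last_modified=None):
    """Fetch a feed the way a polite reader does: conditionally, then cache."""
    req = urllib.request.Request(FEED_URL)
    if etag:
        req.add_header("If-None-Match", etag)
    if last_modified:
        req.add_header("If-Modified-Since", last_modified)
    try:
        with urllib.request.urlopen(req) as resp:
            # 200: new content; remember the validators for the next poll.
            return resp.read(), resp.headers.get("ETag"), resp.headers.get("Last-Modified")
    except urllib.error.HTTPError as err:
        if err.code == 304:
            # 304 Not Modified: serve the cached copy to however many local
            # subscribers there are; the publisher learns nothing about them.
            return None, etag, last_modified
        raise
```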

For some users, that lack of analytics is a privacy boon. The reality though is that the modern internet content economy is built around advertising, and while I push for subscriptions all the time, such an economy still looks very distant. Analytics increases revenues from advertising, and that means it is critical for companies to have those trackers in place if they want a chance to make it in the competitive media environment.

RSS also offers very few opportunities for branding content effectively. Given how important brand equity is for media today, losing your logo, colors, and fonts on an article is an effective way to kill enterprise value. This issue isn’t unique to RSS: it has affected Google’s AMP project as well as Facebook Instant Articles. Brands want users to know that the brand wrote something, and they aren’t going to use technologies that strip out what they consider to be a business-critical part of their user experience.

These are just some of the product issues with RSS, and together they ensure that the protocol will never reach the ubiquity required to supplant centralized tech corporations. So, what are we to do then if we want a path away from Facebook’s hegemony?

I think the solution is a set of improvements. RSS as a protocol needs to be expanded so that it can offer more data around prioritization as well as other signals critical to making the technology more effective at the reader layer. This isn’t just about updating the protocol, but also about updating all of the content management systems that publish an RSS feed to take advantage of those features.
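To make the idea concrete, here is one hypothetical shape such a signal could take: a namespaced extension element carrying an editorial priority on each item. The namespace and element name below are inventions for illustration, not part of any RSS specification, and any real effort would have to be standardized across publishers and readers alike.

```python
import xml.etree.ElementTree as ET

# Hypothetical namespace for a prioritization extension; not a real spec.
PRIO_NS = "https://example.com/ns/priority"
ET.register_namespace("prio", PRIO_NS)

def add_priority(item, level):
    """Attach an editorial priority (1 = lead story, 5 = brief) to an <item>."""
    prio = ET.SubElement(item, f"{{{PRIO_NS}}}priority")
    prio.text = str(level)

# Build a tiny feed with one prioritized item, as a CMS might emit it.
rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Lead story"
add_priority(item, 1)

print(ET.tostring(rss, encoding="unicode"))
```

A reader that understood the element could sort or weight items by it; one that didn’t would simply ignore it, which is how RSS extensions have always worked.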

That leads to the most significant challenge: solving RSS as a business model. There needs to be some sort of commerce layer around feeds, so that there is an incentive to improve and optimize the RSS experience. I would gladly pay for an Amazon Prime-like subscription that gave me unlimited text-only feeds from a bunch of major news sources at a reasonable price. It would also let me get my privacy back, to boot.

Next, RSS readers need to get a lot smarter about marketing and on-boarding. They need to actively guide users to find where the best content is, and help them curate their feeds with algorithms (with some settings so that users like me can turn it off). These apps could be written in such a way that the feeds are built using local machine learning models, to maximize privacy.
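As a toy illustration of what on-device curation could look like, the sketch below ranks headlines against a user’s stated interests entirely locally, with no behavioral data leaving the machine. A real reader would use an actual model rather than keyword overlap, but the privacy property is the same; the items and interests are made up.

```python
from collections import Counter

def score(title, interests):
    """Score a headline by overlap with the user's interest terms (all local)."""
    words = Counter(title.lower().split())
    return sum(words[term] for term in interests)

def rank(items, interests):
    """Order feed items so the most relevant ones surface first."""
    return sorted(items, key=lambda item: score(item["title"], interests), reverse=True)

# Nothing here touches a remote server; the "model" lives on the device.
items = [
    {"title": "New tax bill clears committee"},
    {"title": "Recipe: spring vegetable soup"},
    {"title": "Tax cuts and the startup economy"},
]
print([i["title"] for i in rank(items, interests={"tax", "startup"})])
```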

Do I think such a solution will become ubiquitous? No, I don’t, and certainly not in the decentralized way that many would hope for. I don’t think users actually, truly care about privacy (Facebook has been stealing it for years — has that stopped its growth at all?) and they certainly aren’t news junkies either. But with the right business model in place, there could be enough users to make such a renewed approach to streams viable for companies, and that is ultimately the critical ingredient you need to have for a fresh news economy to surface and for RSS to come back to life.


