This article was written in 2013. It might or it might not be outdated. And it could be that the layout breaks. If that’s the case please let me know.

Arguments against a JavaScript only web (and room for counter arguments)

Yesterday I published a request for feedback about JavaScript-only web sites. I had been having a few discussions over the past week, and some of the arguments in favour of such an architecture sounded quite convincing, especially from a business perspective, since this architecture would take much less time to develop. In that post I explicitly asked for feedback, but only in the form of arguments against this type of architecture, because I thought that’s what was needed: if we want to be able to make a good decision about what kind of architecture we need, we need to know all the arguments. If the comment section had been open without any moderation, it would have turned into a silly yes-no debate. I don’t want that.

So in this post I’ll publish the most important arguments against JavaScript-only web sites that were raised yesterday. Below this post you will be able to publish your arguments in favour of this architecture. No discussions, only arguments in favour. Arguments against belong with the previous post, please.

Argument number one: initial load time is much slower

A JavaScript-only approach will result in a much slower initial load time. Both Mathias Bynens and Jake Archibald clearly explained that it is impossible to make such an app load its content as fast as a page that arrives as plain HTML. A good example is of course the New Twitter fiasco: a performance disaster. It’s clear that the JavaScript-only approach was largely responsible for the slow site, although other factors might have played a role too. But without a doubt, the slower the connection, the slower the app will be. Things might become unusable on a train in the Netherlands. Very important.

It’s often said that performance is better once the whole JavaScript application is loaded. This doesn’t really sound like a valid argument: the same effect can easily be achieved with progressive enhancement.
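
To make this concrete, here is a minimal sketch of what progressively enhanced navigation could look like. None of it comes from the discussion: the data-enhance attribute, the #content element and the X-Partial header are assumptions, and the modern fetch API is used purely for brevity. The server sends complete HTML pages; the script only speeds up navigation where it can and falls back to a normal page load everywhere else.

    // Minimal sketch of progressive enhancement (assumptions: data-enhance links,
    // a #content element, and a server that returns just the content fragment
    // when it sees the X-Partial header).
    document.addEventListener('click', function (event) {
      var link = event.target.closest('a[data-enhance]');
      if (!link) { return; }                    // not an enhanced link: let the browser navigate
      event.preventDefault();
      fetch(link.href, { headers: { 'X-Partial': 'true' } })
        .then(function (response) {
          if (!response.ok) { throw new Error(response.status); }
          return response.text();
        })
        .then(function (html) {
          document.querySelector('#content').innerHTML = html;   // swap only the content area
          history.pushState(null, '', link.href);                // keep the URL meaningful
        })
        .catch(function () {
          window.location = link.href;          // anything goes wrong: plain full page load
        });
    });

Without JavaScript, or when anything in this script fails, every link still works as a plain link, which is the whole point of the comparison.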

Argument number two: accessibility for people and robots

It is argued that a web site should be built with plain HTML. We want to be able to serve our content to different channels (like our own blog and an RSS feed), and we want our content to be crawled by bots. This means we need plain HTML. Everybody seems to agree with that, and it makes an excellent business argument too: clients love SEO and they gladly pay for it.

Argument number three: future friendliness

Wes argued that you put too much trust in the quality of your JavaScript code. This code might very well be incompatible with future devices, as we have seen with mouse-heavy experiences that just didn’t work on touch devices. We’ve also seen it happen recently with a whole CMS that stopped working completely in a newer version of Firefox because of a deprecated function. If you want your site to work for everybody five years from now, use HTML and don’t depend on JavaScript. This is not just about the architecture, it’s also about the quality of your code. Jasper said it nicely: you can scale up your servers; you can’t upgrade your visitors’ device or browser. Nor can you downgrade your visitors’ stuff when your code stops working.

Argument number four: crappy browsers and devices

Matt shared some experiences he had with projects that were built this way, and he pointed out that old browsers have a very hard time rendering thousands of DOM nodes. While old browsers like IE7 and IE8 are quickly vanishing from our stats, there is another trend: cheap mobile devices with Android 2.x installed on them are still being sold as new. Right now. And there is a new trend in computing: speed used to double every year or so, which is still the case, but prices for low-end devices drop by half every year too. This means that not all of our new stuff gets faster and faster; people seem to be satisfied with slower hardware. It’s not only nerds with the newest gadgets that use the web, as erlehmann rightly points out.

Argument number five: complexity of the architecture

One of the main arguments for JavaScript-only applications is the idea that the architecture is much simpler. But both Matt and Jake Archibald argue that this is not the case. According to Matt, complex applications become impossible to maintain, and Jake puts it like this:

In the progressive-enhancement world you’ve got models on the server, controllers on the server & html generation on the server and rendering on the client. You have small bits of application logic on the client for enhanced bits, if you need to update an area of the page just fetch new html for that page section from the server, keep template rendering server-side if possible.

In JS-only land, you’ve got models on the server, controllers on the server & JSON generation on the server, models on the client, controllers on the client, html generation on the client & rendering on the client. This sounds like more to me. Also, the more code you have on the client the harder testing gets, as more of your logic is running on different engines.

Argument number six: don’t break the web

Erlehmann explained that all HTTP status codes lose their meaning. Every page is 200 OK, even if it doesn’t exist. This is an issue that doesn’t exist in a normal browser environment, and it’s something you have to fix programmatically. Many commenters pointed out that it’s silly, hacky and unreliable to emulate native browser behaviour with JavaScript. So, by using these kinds of architectures, you create new, unforeseen but important issues that need to be solved.
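
To illustrate where that meaningless 200 comes from, here is a hedged sketch using a hypothetical Express server; the article route, the data and the template are all made up for the example.

    // Hypothetical Express server contrasting the two situations described above.
    var express = require('express');
    var app = express();

    // Made-up data and template, just for the example.
    var articles = { 'hello-world': { title: 'Hello world', body: 'Lorem ipsum.' } };
    function renderArticle(article) {
      return '<h1>' + article.title + '</h1><p>' + article.body + '</p>';
    }

    // Server-rendered site: an unknown article naturally produces a real 404.
    app.get('/articles/:slug', function (req, res) {
      var article = articles[req.params.slug];
      if (!article) { return res.status(404).send('Not found'); }
      res.send(renderArticle(article));
    });

    // JS-only site: every URL gets the same empty shell, always 200 OK. Whether the
    // content exists is only discovered later by the client-side code, which then
    // has to emulate "not found" behaviour itself.
    app.get('*', function (req, res) {
      res.send('<!DOCTYPE html><html><body><div id="app"></div>' +
               '<script src="/app.js"></script></body></html>');
    });

    app.listen(3000);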

Argument number seven: browser support

Jake mentions it in a sentence, and I thought about it a bit more in a separate comment: if you use a client-side framework to render the page for you, you lose control over what support means for different browsers. If the framework decides that IE8 does not need to be supported anymore, there’s no way to send it any content at all. There is no such thing as gradual support in this architecture: it either works, or it doesn’t.

Conclusion?

There’s no conclusion yet. I am very happy with the excellent comments people have given so far. Some of the brightest people around took some time to share their views on my question, and I’d like to thank everybody who commented. But I haven’t heard all the arguments yet. So, in the comments section below this article you are more than welcome to place your arguments in favour of JavaScript-only applications. Only arguments that support this type of architecture will be allowed here, and I will moderate strictly. Also, be friendly. If you have another valid argument against it, you are welcome to enlighten us in the comments of the previous post. I don’t want discussions yet; I’m collecting arguments first, that’s why.

Comments

  1. I know developers who use JavaScript to externalize computation. With server-side computation, they would either have to allocate more resources or use their given resources more cleverly (e.g. you can easily get by without caching, even if the API you are calling is rate-limited, when the client is the one making the requests).

    Depending on the use case, this motivation can also lead to the development of user scripts, browser extensions or stand-alone apps; all of these have fewer usability issues than a web app (nobody expects a browser extension to have clean URLs).

  2. User tracking and general surveillance (primarily useful for advertising) is so much easier with JavaScript. For an impressive example: with JavaScript one can build heatmaps of mouse movements.

    • Michel

    Ok here we go:

    Load time: it is usually only a little slower and can be optimized, which is exactly what Twitter has done. CSS is also a development optimization that will “always” be slower than inline styling, but I doubt this has caused a similar debate amongst the front-end community. There are loads and loads of complicated technical factors that have to be aligned for a web page to display, even if it is just HTML. My point is, pointing at the loading of JS as the moment when the page needs to display is rather arbitrary and probably based more in history and familiarity than anything else.

    In fact, the initial server response is much, much quicker, so the frame of the page shows up instantly. So the load time debate is not as black and white as posed here.

    As for the longevity and future-proofing aspects, you need to focus on the JSON, not on the JavaScript/DOM. I’d argue a RESTful JSON API is more suitable for consumption by different devices and future scenarios. There is real business value in the API. And of course we are in complete control of backward browser support, so I don’t really get that argument.

    The response codes argument is simply not true: the JSON requests are just normal HTTP requests, and are only different from HTML requests in the text format they produce.

    Finally, complexity: these apps are not more complex, but some of the complexity moves to the client and becomes visible to the front-enders. Server communication becomes much more consistent, so this reduces complexity. Of course you can screw up with these tools, just like with any, but I don’t find that a compelling argument.

  3. First off, the term “JavaScript only” seems wrong. There’s as much back-end, HTML and CSS as ever. I think the common term is “single-page apps”.

    When creating a web application (instead of documents), there really is no question about it. Your user experience is going to be terrible if you have to reload the page, parse the JS and re-render the whole thing every time a user performs an action. Your experience will be miles away from native desktop/mobile apps.

    If you were to design any application from scratch, you would never come up with: “clear the whole screen and render it again every time the user does an action.” This is just something we became used to on the web, because the web was originally invented for documents.

    That being said, I think the future of web apps is in the hybrid method, where the back-end serves up fully rendered HTML the first time, the JS app ‘hooks’ into that, and from then on everything is done by API/Ajax calls. Of course, to keep a single code base, you will have to execute the JS on the server through Node.js or so (e.g. https://github.com/angular/angular.js/issues/2104).

    Most of the arguments above are pretty valid. But it just isn’t that black-and-white. It’s been said before: for content-based sites, sure, do it the old-fashioned way, considering the above arguments. For apps, I wouldn’t dream of going back to the old way.

    • Vasilis

    @Rory, thanks for your comment, but to be clear, I am looking for arguments for JavaScript only applications where the only thing a server sends is JSON, and everything is parsed on the client.

  4. Some server will have to serve an HTML doc on the first request. And there will be stylesheets, HTML templates (pre-compiled to JS functions or not) and other assets. The hybrid method I am suggesting executes the already existing (client) JS on the server to render that first page. This could also allow caching if done right.
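
    A rough sketch of this hybrid idea, with every name, route and port made up: one template function is shared, the server uses it to answer the first request with ready-made HTML, and the client reuses it against the JSON API for later updates. In practice the two halves would live in separate files.

        // Compressed sketch of the hybrid/shared-rendering idea (all names made up).
        function renderUserList(users) {
          return '<ul>' + users.map(function (user) {
            return '<li>' + user.name + '</li>';
          }).join('') + '</ul>';
        }

        if (typeof window === 'undefined') {
          // Server side (Node): the first request gets fully rendered HTML.
          var sampleUsers = [{ name: 'Alice' }, { name: 'Bob' }];   // stand-in data
          require('http').createServer(function (req, res) {
            res.end('<!DOCTYPE html><div id="content">' + renderUserList(sampleUsers) +
                    '</div><script src="/app.js"></script>');
          }).listen(3000);
        } else {
          // Client side: later updates reuse the exact same template with JSON data.
          fetch('/api/users')
            .then(function (response) { return response.json(); })
            .then(function (users) {
              document.getElementById('content').innerHTML = renderUserList(users);
            });
        }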

    • Michel

    @vasilis: this is never the case; the server always sends both. You are making this needlessly black-and-white, I feel.

    • JoepvL

    Basically, the comments below the previous post already summed it up. If you are *presenting content*, and it should be reachable through direct URLs, then simply keep your logic on the server side or use some hybrid form like a pjax pattern (e.g. GitHub, Twitter, and Basecamp (http://37signals.com/svn/posts/3112-how-basecamp-next-got-to-be-so-damn-fast-without-using-much-client-side-ui) are all using variations of this). This way you can directly serve valid content on each request and conform to HTTP, not breaking the web in the process. At the same time you’re still taking some page load overhead out of the equation, and you have the opportunity to do some serious server-side caching, helping you optimize performance.
    If you are building a UI-driven application that has many changes going on in the DOM at many different times during its lifecycle on a page, it’s probably a good idea to use a JavaScript MVC(-like) framework to organize your code.
    These days, this has become something to always consider when starting a project. I don’t really understand why you would only be “looking for arguments for JavaScript only applications where the only thing a server sends is JSON, and everything is parsed on the client.” (It’s not like you only allowed people to comment on that below your previous post.)
    Like Michel stated, this is too black-and-white; there are use cases for both, but nobody is trying to say that any variation should be used for everything. I’m sorry if this is not as one-sided a comment as you were looking for, but I can’t in good conscience be any less nuanced about it.

    BTW, it looks to me like there might have been some miscommunication here; Vasilis says “[…] applications where the only thing a server sends is JSON, and everything is parsed on the client,” and in turn Michel says “[…] the server always sends both.” It is the web server that the API is served from that only sends JSON. The server that you host the front-end code on sends the initial payload and any subsequent HTML (template) data. This can of course be the same server, and both can even come from the same domain, but the API and the UI are still separate entities. This is not a part of this discussion, if it ever was a real discussion.

    @Michel the HTTP status codes are *not* a moot point; we are talking about direct links to content on the web. Twitter is a good example: when they switched to using #! in their urls (i.e. twitter.com/#!/user/status/status-id), the entry point at twitter.com would always give you a 200 OK since what comes after the # is never sent to the server (in fact, even now, navigating to twitter.com/#!/foo/bar will result in a 200, after which the page’s JS will redirect you to a 404 page for twitter.com/foo/bar). The point is, while all of this information – including status code – may be available from your REST API, nobody links directly to that (why would they? Of course, Twitter has its own API that it uses for disclosing its content to other applications, but it did make the actual twitter.com pages less “accessible”). Having content and other information (like meaningful status codes!) directly behind links is a fundamental part of the web, and using hashbang in a URL like this breaks that behaviour. I should mention that taking out the hashbang and using the History API instead does not solve this problem by itself. The problem is in routing every request through a single point and always returning the same response, subsequently putting the responsibility of your site’s routing on the client side. If the dynamic part of the URL represents application state (not really a “Uniform Resource Locator” anymore then, is it?), then that’s okay since an application isn’t strictly about HyperText Transfer anyway. Again, problems with this approach only arise when dealing with actual content that *is* meant to be disclosed in a way that conforms to the HTTP standard.
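
    To make the hashbang mechanics above concrete, here is a small sketch with made-up API routes and a stand-in render function: everything after the # stays in the browser, so the HTML document itself always arrives as 200 OK, and “not found” has to be reconstructed in client code.

        // Sketch of hashbang routing (made-up API routes and a stand-in render()).
        // For https://example.com/#!/foo/bar the browser only requests "/" from the
        // server, so the document always arrives as 200 OK.
        window.addEventListener('hashchange', route);
        route();

        function route() {
          var path = location.hash.replace(/^#!/, '') || '/';   // e.g. "/user/status/123"
          fetch('/api' + path).then(function (response) {
            if (response.status === 404) {
              // The real status code only shows up here, long after the document
              // was delivered, so "not found" has to be faked on the client.
              document.getElementById('app').innerHTML = '<h1>Not found</h1>';
              return;
            }
            return response.json().then(render);
          });
        }

        function render(data) {
          // Stand-in for real template rendering.
          document.getElementById('app').textContent = JSON.stringify(data);
        }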

    • Heini

    In some cases it’s good to look back and see what happened in the history of the Web. Since the late ’90s we’ve all been trying to build web applications that felt like ‘real native’ applications. The only problem was: the browser wasn’t built for it, it was page-oriented instead of state-oriented, the underlying protocols didn’t help us, etc.

    However, we’ve been very resourceful in getting stuff to work, although in some cases the implementations became quite monstrous. Tools came to the rescue. In the early days, Flash was our friend for building application UIs inside a browser. Macromedia even turned it into a whole platform, Flex. Yes, a few brave ones tried to build complete websites in it, but it turned out to be hopelessly complex. It failed and we abandoned it all.
    Microsoft saw this happening and came out with its own solution: Silverlight. Big introduction; the web would change forever. No need to learn JavaScript anymore; just use your C# skills and you could write both front- and back-end functionality… It didn’t make it. Yes, Silverlight is still around but is mostly limited to video players. The same holds for Google’s GWT. Another attempt, didn’t make it.

    That’s how I see this discussion as well: we’ve got another way of building websites/web apps. Why would this be any different from all previous attempts? Yes, we’ve gone forward; we didn’t have JSON APIs in the past (or at least we didn’t call them that). New frameworks and tooling came around.

    I’m very much in favor of the hybrid approach. Use the classic page interface of the web (browsers, URLs, SEO, etc.) and enhance it with application logic when needed.