This article was written in 2013. It might or it might not be outdated. And it could be that the layout breaks. If that’s the case please let me know.

I’m looking for arguments against a JavaScript only web

All websites should work without JavaScript; that’s how I learned to create good websites. This is of course a very solid idea. If your browser doesn’t understand, or receive, the JavaScript, it will still show the HTML, and everything just works. Until last week I had no doubts about this architecture. But this week I’ve been having conversations with a few people who had some very solid arguments for a very different architecture: the server spits out JSON instead of HTML, and the whole application is created from this JSON, in the browser.
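To make that concrete, here is a minimal sketch of what that second architecture looks like in the browser (the URL and the field names are made up for illustration): the server only answers with JSON, and a script builds the document from it.

    // A minimal sketch: the server only answers with JSON,
    // and the browser builds the document from it.
    // The URL and the field names are invented for this example.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/articles/1');
    xhr.onload = function () {
      var article = JSON.parse(xhr.responseText);
      var heading = document.createElement('h1');
      heading.textContent = article.title;
      var body = document.createElement('p');
      body.textContent = article.body;
      document.body.appendChild(heading);
      document.body.appendChild(body);
    };
    xhr.send();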

Benefits

From a developer’s perspective this is a very clever solution: the back-end is only concerned with the design of a clever, solid API, and the whole application, with its logic, is created only once, in the browser. There is a very clear separation of concerns here: if the application breaks, it’s either the API, or the application, and not one of the several layers that are involved.
The idea is that this kind of development can be much cheaper. And cheaper is a business thing. If a client has the choice between a good, cheap solution, and a good expensive solution that both look the same, the choice is easy.

Arguments against JavaScript only

Now, these are some very convincing arguments to some. If we want to keep developing web sites and apps with our ‘classic’ approach, we need some very convincing, and very solid arguments, from more than one perspective. We need arguments from a user’s perspective. For instance: how do both architectures affect usability, can people link to stuff, can they easily search for stuff, and can they share a URL with their friends? And we need arguments from a developer’s perspective, where we care about things like productivity, the ability to test our stuff and the performance of the site we’re making.

Help!

And to be honest, I’m having a hard time coming up with truly convincing arguments. Of course I know a few, but from a business perspective they sound rather marginal: there are probably fewer people without JavaScript than people with IE7 and IE6. Even cheap, low-end Android devices will eventually render your application, it will just take some time (just like everything on those devices, right?). From an accessibility point of view a JavaScript only web is not really an issue anymore, if I understand correctly: screen readers read the screen, not the view-source.

I, we all, need more and better arguments. So please, if you know any, the comments are open. Right now, I’m not looking for arguments in favor of a JavaScript only approach, and I’m definitely not looking for trolls, so please, stay on topic and be nice.

UPDATE: You should definitely read the comments. There are some very interesting arguments in there.
UPDATE UPDATE: I wrote a summary of the arguments you can find in the comments. And I opened up comments over there for arguments in favour of JavaScript only development.

Comments

  1. You already listed the most important argument IMHO: performance. Remember when Twitter started to use client-side JavaScript to render their HTML pages dynamically based on the data they loaded through their own HTTP API? Yeah, me too. It was terribly slow — not because of a flaw in their implementation, but rather because serving JavaScript to a browser only to have it generate a DOM is _always_ gonna be slower than simply sending raw HTML down the wire.

    Twitter eventually switched back to serving plain HTML responses that were then enhanced through JavaScript.

    To me, the fact that a tech giant like Twitter tried going JavaScript-only and found out the hard way that it didn’t work is the ultimate proof that it’s a bad idea, at least for the foreseeable future.

    When presented with a choice between sending HTML to the browser, or sending JavaScript to the browser that will dynamically generate similar HTML, it’s obvious which one will yield the best load-time experience.

    • Cyriel
    • #

I’m thinking of two (or actually three) approaches for building current-gen and next-generation sites:

    * Content driven
    * Hybrids
    * Task driven

    For purely content driven sites I don’t see why you should use JavaScript whatsoever, because it’s all supported in HTML and CSS. The tools are already here in the form of serverside frameworks and SSGs. IMHO you don’t want to do this with JavaScript, especially because you will run into major problems concerning SEO for example. Also note that the content and the way it’s being presented is quite static, it won’t change much.

Task driven sites, however, have a more temporal nature. They constantly change form and shape because their content changes more often. Also these sites tend to be more feature driven. For these kinds of sites JavaScript is an excellent choice due to the ‘appiness’ involved.

Hybrids basically take the best of both worlds and mix them together in the right proportions. Webshops for example, or the site of an insurance company.

    Hope this is clear enough :)

    • Vasilis
    • #

    @Mathias: Hooray, thanks for your comment! While performance is a good argument, it’s also an argument many JavaScript-only evangelists use. They say that updating sections of a page/app is much faster.

Twitter had some very serious problems with delivering their ‘first tweet’ in time, as they called it. This definitely had to do with the nature of things: fetching HTML with a script tag, and then fetching the content, and then finally rendering it is indeed slower by nature. But are you sure this specific implementation had no influence on these terrible results?

    But it is a very solid argument, one that I’ve been using myself too, this week. I’d love to see some more data though.

    • Vasilis
    • #

    @Cyriel, thanks for your comment. I understand the three approaches you outline, but I have an issue with it: I think almost *all* sites are hybrids. So if somehow it’s OK to use a JavaScript only approach for some types of content I think we need a more realistic, and probably more complex decision tree than this one.

  2. @Vasilis: I’m not sure what you mean. How else can this type of “functionality” — rendering all the data received through an HTTP API on the client side — be implemented, if not through the “JavaScript-only” approach? Is there a better way to achieve the same thing (without simply using HTML + progressive enhancement through JavaScript)?

    • Vasilis
    • #

@Mathias: what I mean is that part of the terrible results of Twitter could be in the way they implemented their solution: maybe their API calls were gigantic? Maybe there were other mistakes they made, apart from this one? While a client-side JavaScript-only solution will always be slower than an HTML + progressive enhancement one, the question is: how much slower? If it’s a few seconds, it’s very easy to convince the client that this is bad for business. But if it’s just a fraction of a second this is a harder argument to win.
    That’s why I want more data, and not just this single Twitter case.

From a UX perspective, I would propose that starting without JavaScript is just as useful as starting content first. More complicated interactions do not always make the product better. The web without JavaScript is pretty solid. So adding layers to it is something that should be considered with extreme care. Just like creating the content before type, layout and paint, one could create experiences without animation, ajax calls and complicated back-end and front-end infrastructure. It is hard enough to get that right, so making sure you do this first will ensure your complete experience gets better.

Another argument is the unpredictability of future devices. I often hear arguments that we do not need responsive design (read: mobile optimization) because input is hard on touch. This could change with a single future innovation. Input might just be magically simple on a future device. The same goes for JavaScript. We might just see a piece of hardware that does not like JavaScript. Or doesn’t like color (like e-readers). Or CSS in general (RSS reader hardware?). Build not only for what you can predict, build for unpredictable events. Nature has shown us long enough that the most agile organisms survive. Not the heaviest.

    • Jasper
    • #

I think you’re talking about 2 things here: separating site presentation from content generation, and what tools to use for site presentation. The first concept I think we can all agree on is a good thing. That said, a server offering content in JSON form through an API to a JavaScript application does of course still involve several layers. You’ve just shifted the burden of having to worry about most of them from the frontend developer to the backend developer; and instead of frontend and backend developers both having to work on specifying a page template and generating it on the server, they need to both work on an API. Still, the separation of concerns makes sense.

The second concept, that of using JavaScript instead of HTML, is in my eyes poor use of tooling. HTML is for content markup, CSS for markup styling, JavaScript for custom functionality and advanced interaction. You can use & abuse tools beyond their purpose to avoid having to learn how to use a different tool properly, but people, browsers, devices, screen readers, fellow developers, servers and web crawlers expect the web to work a certain way and are optimized for that. Besides that, you can scale up your servers; you can’t upgrade your visitors’ device or browser. I’d like the core of my content to be presentable as soon as possible, and JavaScript is not the tool for that. Cyriel raises a good point that not every website is primarily about content though; I do think that matters.

    • Vasilis
    • #

    @Wes, thanks for your comment. I definitely agree with the first point, but that’s not really what this discussion is all about. It’s not about fancy shit, it’s about rendering the DOM with JavaScript, instead of rendering it on the server. Apart from the initial page load, which will always be slower like Mathias rightly said, there don’t have to be any differences in how things look.

    Your second argument is excellent. If you want to build something that lasts, make it as durable as possible. Depending on JavaScript sounds a bit like depending on Flash. Which sounded like a good thing back in the day, but turned out to be not compatible with the devices we use right now. Very good point Wes, Thanks!

    • Vasilis
    • #

    @Jasper, thanks for your comment. I don’t think these JavaScript frameworks ignore the web stack: they generate the DOM, but they don’t generate CSS, and extra functionality is still added with JavaScript. So the separation of structure from styling and behaviour still exists.

Then you make the valid point that people and machines expect the web to behave in a certain way. This may be true right now, but it could also change. For instance, not all developers expect the web to behave like this, as we can see from the rise of these frameworks. And some crawlers are experimenting with crawling JavaScript dependent sites.

    I really love this quote: “you can scale up your servers; you can’t upgrade your visitors’ device or browser” and I will use it in every conversation on this topic from now on!

    • Matt
    • #

I have worked on three large scale projects (several million users) that were JavaScript-only. Two of them failed on different levels:

One, a social network, failed because search engines couldn’t find any content. This might have changed since then. It also failed because of performance issues. IE6 obviously wasn’t fast enough for building and handling pages with a few thousand DOM nodes. This did change in modern browsers, but IE7 is still around.

    The second, an email client, failed because of performance issues. This was a large, complex application, and had three different performance issues:

1. Page load. There was that joke: “What is web 2.0? – When the HTML part is 2kb and the JavaScript part is 2mb.” Loading all JavaScript files and the data took too much time, even with optimizations like pre-loading files on the (static) log-in page. Ten to twenty seconds from log-in to page load was normal – on a developer’s computer with fast internet access.

2. Memory management. Since it was a single page app, memory leaks summed up over time. Of course we tried to reduce memory leaks whenever possible, but still the memory usage grew to several hundred MB. (Maybe the GMail team is able to fix memory leaks with the help of the Chrome developers. Not all companies have that kind of help though.)

3. Complexity. Since it was a single page app, all different parts had to be handled at once. Mail folders, editor, address book, calendar, file storage, all could be present at the same time. Of course we only loaded those parts on demand, but once a part was loaded, things got out of hand because of the dependencies. Sending an email had an impact on the editor (hide it), the mail folders (add mail to sent folder), the address book (update list of recently used addresses) and probably the calendar (if the mail was for a date). Updating many parts at once took too much time. Besides, in the end, changes to one part could lead to bugs in others without the slightest hint that this could happen.

The second project was replaced by one based on Wicket. Now, from a frontend engineer’s perspective, this is far from ideal, but it reduced the complexity and fixed most of the performance issues.

  4. Vasilis, but they do ignore the web stack: URLs suddenly cannot be dereferenced anymore by an application not having JavaScript support, HTTP status codes lose meaning if it is the JavaScript that finds that a resource exists or not – every page is HTTP 200.

Then there is the issue of LANGSEC: Turing complete input languages – yes, also template languages – create a halting problem of network insecurity and make filtering (AdBlock etc.) next to impossible. Have you read about the Principle of Least Power?

    In the end this is about accessibility: Those who build JavaScript-only pages measure the quality of their creations in what it does for the computationally privileged – I measure it in what it does in the worst case.

  5. All your users are no-js users before your js downloads.

    On performance:

    If the page is delivered as HTML & CSS, the only thing blocking rendering is your CSS (unless you have blocking scripts, in which case you’re doing it wrong). Once that’s there the browser can start painting content, even if the HTML hasn’t finished downloading.

    In fact, browsers will prefetch pages they think you’re going to load & scan them for required assets, such as CSS, JS and images.

    Compare that to a JS driven site. It must download the HTML (although small), CSS and JS, parse & execute the JS, then make an API call, then act on that API call, then perhaps the browser can paint some content. This is why old Twitter felt so slow. You can speed up bits of this by making the API faster, but there are only edge cases where JS-dependent sites can beat progressive-enhancement sites to first-paint.

    Yes, updating a small area of the page can be faster than refreshing a whole page. That’s part of progressive enhancement, do it where it makes sense. Just don’t make the user wait for that on first load.
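    Roughly like this, as a sketch (the fragment URL and the element id are made up):

        // Progressive enhancement: fetch only the fragment that changed,
        // and keep generating the HTML on the server.
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/stream/page/2?fragment=1'); // made-up URL
        xhr.onload = function () {
          if (xhr.status === 200) {
            document.getElementById('stream').innerHTML += xhr.responseText;
          }
        };
        xhr.send();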

    On development cost:

I don’t buy this argument at all. In the progressive-enhancement world you’ve got models on the server, controllers on the server & html generation on the server and rendering on the client. You have small bits of application logic on the client for enhanced bits; if you need to update an area of the page, just fetch new html for that page section from the server, and keep template rendering server-side if possible.

In JS-only land, you’ve got models on the server, controllers on the server & JSON generation on the server, models on the client, controllers on the client, html generation on the client & rendering on the client. This sounds like more to me. Also, the more code you have on the client, the harder testing gets, as more of your logic is running on different engines.

    On compatibility:

    I’m a big fan of the BBC’s “Cut the mustard” approach, and have used similar things in the past, eg http://m.lanyrd.com. We supported loads of devices by avoiding JS enhancement on anything we didn’t think worthy of enhancement. Taking JS out of the equation for older devices made maintaining the site for them a breeze.
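    For reference, the ‘cut the mustard’ test is roughly a feature check like this – browsers that fail it simply get the plain HTML experience (the script file name below is made up):

        // Only load the JavaScript enhancements in capable browsers;
        // everything else gets the plain HTML and CSS version.
        if ('querySelector' in document &&
            'localStorage' in window &&
            'addEventListener' in window) {
          var script = document.createElement('script');
          script.src = '/js/enhancements.js'; // made-up file name
          document.getElementsByTagName('head')[0].appendChild(script);
        }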

    • Cyriel
    • #

@Vasilis: What I actually tried to say (and others also do I guess) is that by saying “all our output is generated through JavaScript in every use case” you are moving into very dangerous territory. Because when you say this you are basically defying the principle of progressive enhancement and the concept of universal access, which are still very important, especially with future friendliness in mind (like Wes also mentioned).

In fact I think you need some *very* good reasons to bend these rules. And no, a developer wanting to build an API and a frontend based on Backbone or Knockout just because it’s fun, cool and interesting isn’t a very good reason. But using JavaScript to make considerable improvements to the user interface and UX, for example, can be a justifiable one.

However it is not justifiable to just use it by default and show a blank screen without any good explanation of *why* you decided to do this when the user has disabled JavaScript for whatever particular reason he or she has (that Java thing in the name sounds rather scary for example, given Oracle’s security track record).

This is what I mean by task versus content versus hybrids. Within a ‘content based site’ people consume content, and I don’t see any reason to make this completely dependent on JavaScript, since when you use HTML and CSS you are already producing documents which are universally accessible the moment they leave an editor (or are assembled through a server side framework or SSG).

A ‘task based’ site however is a completely different beast. People come here to actually *do* stuff, providing lots and lots of information to create a social graph around themselves for example. Here performing the task as easily as possible matters most, and JavaScript helps to accomplish that. Since you are dealing with lots and lots of logic here I think it’s entirely feasible to think about using a framework such as Backbone to accomplish this.

Hybrids here however are the main pitfall. You really have to think long and hard about using JavaScript for the right things here, because the lines between ‘content’ and ‘task’ are kind of blurry and it’s quite easy to screw up. I’ve seen this happen a few times in practice; a good example would be using JavaScript to display product pages because it makes switching variants easier. What this developer unfortunately didn’t consider was that Google doesn’t crawl output generated by JavaScript *facepalm*.

Anyway, what I was trying to say is that while it might be perfectly OK from the perspective of a developer in any use case (because they just happen to love using the latest and greatest tech over anything else), from a design perspective this decision should NOT be taken lightly, and while making it things like context and purpose should be taken into account, because these matter much more than having and working with the coolest technology stack.

    • Cyriel
    • #

    @erlehmann: “HTTP status codes lose meaning if it is the JavaScript that finds that a resource exists or not – every page is HTTP 200.”

Erhm, I don’t follow. Do you mean that external calls from JavaScript return 200 by default? That’s incorrect. Serverside code is perfectly capable of returning any HTTP code and body you want, and XHR is not only capable of returning content, but also the status code and headers. I should know because I have worked off and on with it since its implementation in IE5 in 1999 :p

Cyriel, assume I have http://example.org/A and http://example.org/B – further assume all pages on http://example.org are rendered client side by JavaScript. Now a GET or HEAD to http://example.org/C will probably answer with an HTTP status code of 200 OK regardless of whether it exists or not.

    • Vasilis
    • #

    @Matt, thanks for your practical insights. Absolutely fantastic to read about the failures when most of the time the only stuff we read about are the amazing success stories. Very, very valuable, thanks!

    • Vasilis
    • #

    @erlehmann thank you for those links. I had heard of the principle of least power, and I completely agree with it. I hadn’t heard of LANGSEC before. Great stuff, thanks! (and I fixed the weird URL-replacing thing)

    • Vasilis
    • #

    @Jake, great explanation of the performance issues. It’s clear that there is no way an architecture like this can ever be faster. And the other performance gain that JavaScript-only-advocates talk about is not exclusive: reloading parts of the page works on every well developed site.

    I absolutely love your analysis of the development costs. Since I’m no software architect (at all), this is very valuable information.

    There’s a very good point you make somewhere within a sentence: it is harder to test a site like this because much more of its logic runs on different engines. There’s much more that could go wrong. If you use a framework for overcoming some of the browser quirks, you are not in charge of browser support anymore. The framework might decide that IE8 doesn’t need any support at all, while your stats might not justify this decision at all. The cutting the mustard strategy gives you complete control over support, and over what the exact definition of support is.

    Thanks for your time Jake!

“If you use a framework for overcoming some of the browser quirks, you are not in charge of browser support anymore.”
    My standard response to people advocating needless complexity is “I used to be a programmer like you, but then I got a framework to the knee.”.

    • Bart
    • #

    I think we should also differentiate between a cold load and a hot load.

    The first time a JS app runs, it needs to load HTML, CSS, JS, then execute JS, then make an API call, wait for the result and render it in the browser. Obviously this is slower than static html.
    However, from then on, JS should theoretically be faster, since all subsequent requests consist of API calls + rendering JSON. That should be faster than reloading your whole page.

    It really boils down to your use case. Do your users just want to see some content and move on with their lives? Just use HTML. However, if they’re here to stay on your website for a longer time, and they need to get things done, using a JSON API may be more appropriate and actually improve the user experience.

@erlehmann xhr.status should return an accurate HTTP status code. If your server sends a 404, your client *will* receive a 404. Note that servers often serve HTML error pages, which will be served with a 200 status code. However, if your backend developer knows what he’s doing, he shouldn’t be serving any HTML from a JSON API in the first place.
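    A quick sketch of that (the endpoint is made up): the client sees exactly the status code the API sends.

        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/api/document/5'); // made-up endpoint
        xhr.onload = function () {
          if (xhr.status === 404) {
            console.log('the API says this document does not exist');
          } else {
            console.log('document:', JSON.parse(xhr.responseText));
          }
        };
        xhr.send();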

  8. Bart, XHR might return an accurate status code. But the initial page will always answer 200. This breaks – among others – link checking using HEAD requests.

I love the idea of a central frontend API that serves all information required to render the website. However, I would still want it to be a website, not a webapp. Stuff gets complicated quite quickly once you turn onto the webapp street. This is not necessarily a bad thing, but it is when there’s no need for your website to be a webapp.

If the information contained in the initial server generated HTML is received from a central API which is also accessible through XHR, then doing sweet progressive enhancement trickery suddenly becomes a lot easier.

    • Michel
    • #

    Wow, what an excellent conversation. As the culprit who started messing with Vasilis’ mind, allow me to chime in.

    Besides the advantages that V. already listed there is also real business value in the API itself. It allows a single entry point for web, mobile and other system clients, where all business logic can be consolidated.

I don’t think a widespread, open standard like JavaScript with massive support from all the big techs is comparable to Flash. So a future without JS is not really a concern for me.

    As for performance: I don’t see how you can say that one will never be faster than the other. Besides, Twitter may be doing more server side, but they are still updating these HTML snippets through JS. I’m not sure, but I’d be really surprised if twitter.com even worked without JS.

    You can build hugely complex things with blazing fast client side navigation, browsing, sorting, etc. with these frameworks, all in a modular, well defined style and a minimal amount of code (if you do it right). And all desktops and most mobile clients have plenty of performance to make things go really snappy. So besides the initial load (no small concern, but this can be optimized) I don’t feel performance suffers, on the contrary.

    Anyway, thanks for all your inputs, I’ll keep reading!

    • Bart
    • #

    That depends on what your actual page is. If you have a one-pager at index.html, then it’s only normal that a 200 response is always returned.

    The actual content is provided by the JSON API, and a request to api/document/5 could respond with any status code you want. If you want to check if document 5 exists, you don’t HEAD index.html, but you HEAD api/document/5.
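    As a sketch of what I mean (assuming a plain Node.js server; the in-memory store stands in for a real database):

        var http = require('http');

        // A tiny in-memory stand-in for a real data store.
        var documents = { '5': { id: 5, title: 'Hello world' } };

        http.createServer(function (req, res) {
          var match = req.url.match(/^\/api\/document\/(\w+)$/);
          if (match) {
            var doc = documents[match[1]];
            // The JSON API answers with an honest status code...
            res.writeHead(doc ? 200 : 404, { 'Content-Type': 'application/json' });
            res.end(JSON.stringify(doc || { error: 'not found' }));
          } else {
            // ...while the app shell itself always answers 200.
            res.writeHead(200, { 'Content-Type': 'text/html' });
            res.end('<!DOCTYPE html><title>app shell</title><div id="app"></div>');
          }
        }).listen(8080);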

    • Michel
    • #

    The development workflow improves considerably because all presentation changes are made in a separate codebase from the server. As long as the API remains stable, they can be on completely different iterations and deployment schemes. And deploying the app is trivial.

    • Vasilis
    • #

    @Michel, thanks for your feedback, but I explicitly asked for on-topic, friendly comments, in this case: arguments *against* JavaScript only development. Please stay on topic (the same applies for people talking about HTTP status codes). From now on I will remove *all* off-topic comments.

erlehmann & Bart: That depends on your setup entirely. But in the case of a SPA (Single Page Application) that would initially trigger a 200 indeed. The calls the application makes to the API would in turn be able to return a 404 status code. That would be too late for any Search Engine Bot though.

    The first steps in solving this issue were made by Google back in 2009:
    http://googlewebmastercentral.blogspot.com.au/2009/10/proposal-for-making-ajax-crawlable.html
    (Source: http://backbonetutorials.com/seo-for-single-page-apps/)
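    The gist of that proposal, roughly: the crawler requests a #! URL with the fragment moved into an _escaped_fragment_ query parameter, and the server answers that request with a pre-rendered HTML snapshot. A sketch (assuming a plain Node.js server):

        var http = require('http');
        var url = require('url');

        http.createServer(function (req, res) {
          var query = url.parse(req.url, true).query;
          res.writeHead(200, { 'Content-Type': 'text/html' });
          if ('_escaped_fragment_' in query) {
            // The crawler asked for /?_escaped_fragment_=/products/5
            // instead of /#!/products/5: serve a pre-rendered snapshot.
            res.end('<!DOCTYPE html><h1>Pre-rendered snapshot goes here</h1>');
          } else {
            // Normal visitors get the JavaScript application shell.
            res.end('<!DOCTYPE html><title>app shell</title><script src="/app.js"></script>');
          }
        }).listen(8080);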

The performance? I think the only issue is the very first page load. After that you get (offline) caching, manifest files, and much smaller transfers because of JSON-only API calls.

    It will always be a project specific matter. A JavaScript only web will not exist any time soon. But there are more than enough use cases that prove JS only useful. Take for example SoundCloud.

    My opinion: Grow your service large enough and there will be a valid case for JavaScript only. (Using an API which serves multiple platforms) Grow it even bigger and there will be a valid case for JavaScript only co-existing with HTML pre-rendering. (Case in point: Facebook.)

Vasilis, I believe the status code thing is very much related to the discussion. If there are different URLs for human and machine interaction (like Bart implies) this leads to poor usability.

    • Vasilis
    • #

    @erlehmann, point taken, go on with your discussion (-:

    • Michel
    • #

OK, I would say my 2 arguments against this way of coding are: increased initial load time, and being less suitable for content driven sites. But in reality it doesn’t have to be this black and white, and sites often have ‘traditional’ content sections and JSMVC app-y sections.

    If you want to hear a bit more on the subject, listen to the keynote and another presenter from the latest railsconf:

    http://www.confreaks.com/videos/2422-railsconf2013-opening-keynote-patterns-of-basecamp-s-application-architecture
    http://www.confreaks.com/videos/2447-railsconf2013-rails-vs-the-client-side

  12. Okay, next argument: I have a strong suspicion that the ontological error inherent in building a site-specific browser using JavaScript actually hinders HTTP caching. For one, a cache needs to be larger. Also, the cache is invalidated needlessly whenever the application logic changes – which, especially in a complex web app, happens quite often.

Additionally, the number of requests should be a concern. It invariably goes up with lots of JavaScript. I browse most of the time using a current browser and a UMTS connection – usually this translates to 300 to 700ms until I get a reply from the server. Shoddy developers who have only tested on localhost of course do not notice the slowdown introduced by even 6 different scripts.