I thought I’d do a review of the most popular JavaScript frameworks (based on data from JSDB.io); please leave comments/feedback as this is all my personal opinion:

Angular - Yes, this is the most popular JavaScript framework right now. This MVC solution is definitely on the heavier side of development, as it wraps HTML and JavaScript into fully reusable components. I call it “Web Components Beta” because it shares some very common features with Polymer: custom components, data bindings, imports, and templates are all in use in Angular (perhaps named differently there); however, the shadow DOM feature, exclusive to Web Components, is much more in line with the future of HTML development, and the heavily scrutinized, Google-backed Web Components implementations will win out eventually. Angular rejects imperative frameworks such as Backbone because they are less ‘hands-on’ with the HTML, and because frameworks like Backbone encourage keeping their technologies apart instead of using them all together. I disagree on this one, as technology soup is great for putting things together, but not for understanding them later.

Ember - Another MVC framework that tries to do all of the heavy lifting for you. Ember makes common MVC tools available to you, such as integrated Handlebars templates. It has custom components, while its routing and models seem easy to use, much like the lightweight Backbone. From a quick glance, Ember seems to sit right between Angular and Backbone on the “heavily wired declarative HTML vs lightweight imperative JavaScript” spectrum of front-end MVC frameworks. Perhaps this is why they advertise it as easy to set up, yet powerful enough for more advanced component reusability.

Select2 - This is a jQuery-based replacement for select boxes. It supports searching, remote data sets, and infinite scrolling of results. The power of this library comes from the ability to turn your user inputs into components which support tons of user manipulation (like search and multi-select) and make it easy to propagate/retrieve this data to/from the server, as well as render/load results on the page. After having to do a lot of work for data-to-input-to-data implementations in the past, I am pretty excited about trying this library out. Oh yeah, and its inputs play nice with infinite scroll :)
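Just to sketch what the setup might look like (the element id and the /api/tags endpoint are made up, and I’m assuming the 3.x-era ajax options):

// A rough sketch: a text input backed by a remote data source.
// Assumes the endpoint returns an array of { id: ..., text: ... } objects.
$('#tag-picker').select2({
  placeholder: 'Search for a tag',
  minimumInputLength: 2,
  ajax: {
    url: '/api/tags',
    dataType: 'json',
    data: function (term) { return { q: term }; },
    results: function (data) { return { results: data }; }
  }
});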

Backbone - This imperative MVC framework is probably the easiest to learn and implement. As long as you know JavaScript, you can easily research how to throw some models and views together, add a router, and you are ready to go. I was able to drop this framework in and rewire one of my own projects in about 15 minutes. Its only hard dependency is Underscore, a lightweight utility library (with templating) which works very nicely with Backbone. Combined with RequireJS, Backbone becomes an easy-to-implement, well-organized MVC solution with a great separation of technological concerns. Having the JavaScript separate from the HTML means you have abstracted views which can be pulled apart and moved very easily, which gives developers some crucial flexibility in the later stages of a project. Of course, the downside is that the framework does not encourage reusable, easily testable components, but hey, after working with Java’s Tapestry framework, I have seen “over-componentizing” lead to developers not understanding code/technology internals, as well as having serious difficulties refactoring without system-wide side effects.
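Here’s a rough sketch of what I mean by throwing models, views, and a router together (the names, template, and #tasks element are made up for illustration):

// Model with a sensible default.
var Task = Backbone.Model.extend({ defaults: { title: 'untitled' } });

// View that renders the model through an Underscore template.
var TaskView = Backbone.View.extend({
  tagName: 'li',
  template: _.template('<span><%= title %></span>'),
  render: function () {
    this.$el.html(this.template(this.model.toJSON()));
    return this;
  }
});

// Router that wires a route to rendering a view.
var AppRouter = Backbone.Router.extend({
  routes: { '': 'home' },
  home: function () {
    var view = new TaskView({ model: new Task({ title: 'Write blog post' }) });
    $('#tasks').append(view.render().el);
  }
});

new AppRouter();
Backbone.history.start();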

Three - A lightweight 3D library for “dummies” :) This library provides Canvas, SVG, CSS3D, and WebGL renderers. It combines a lot of great, modern rendering implementations into one easy-to-use framework and makes things like rendering a 3D cube simple. Check out all the truly awesome examples here: http://threejs.org/
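For a taste, here’s roughly what the spinning-cube “hello world” looks like (sizes and colors are arbitrary, and I’m assuming a reasonably recent build with BoxGeometry):

// Scene, camera, and WebGL renderer.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 5;

var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// A green 1x1x1 cube.
var cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshBasicMaterial({ color: 0x00ff00 })
);
scene.add(cube);

// Render loop: rotate the cube a little each frame.
(function animate() {
  requestAnimationFrame(animate);
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
})();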

Underscore - Known as “JavaScript’s utility belt”, it provides many great browser-compatible functions which are commonly used by a large number of JavaScript developers. The template function is especially useful for MVC frameworks that use dynamic pieces of DOM to re-render elements on the page. Underscore functions are highly respected throughout the JavaScript community and are often preferred to their native ECMAScript 5 counterparts. This lightweight framework is easy to use and often makes development cleaner and easier.
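As a quick sketch of the template function in particular (the markup and data here are invented):

// Compile a template once, then feed it data objects.
var itemTemplate = _.template('<li class="user"><%= name %> (<%= email %>)</li>');

var users = [
  { name: 'Ada', email: 'ada@example.com' },
  { name: 'Grace', email: 'grace@example.com' }
];

// Map each user through the template and drop the result into the page.
$('#user-list').html(_.map(users, itemTemplate).join(''));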

jQuery - Kind of amazed it is seventh on the list right now. I assume this is because everyone just takes it for granted, but it’s really #1 for me. I’m not going to leave a description for this one, because if you don’t already know what this does, then you need to hit the internet hard immediately!

React - Developed by Facebook and Instagram, its features include a one-way reactive data flow to simplify view-model interactions and a virtual DOM for quick rendering changes. React’s developers encourage you to use the framework for even just the view part of your MVC solution; since React makes no assumptions about the rest of your technology stack, it’s easy to drop it into a small feature of your site. JSX, a syntax for turning JavaScript into more of a template (and back), is available to help with rendering. I would place React on the “heavily wired declarative HTML vs lightweight imperative JavaScript” spectrum right between Ember and Angular. Similar to Ember, it provides functionality for custom components, but it goes way beyond with optimizing DOM manipulation (which is its biggest advantage).
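A tiny sketch of the createClass/JSX style (the component name and mount point are made up; depending on your React version the top-level call may be React.render or the older React.renderComponent):

/** @jsx React.DOM */
// A minimal component: props in, rendered markup out.
var Greeting = React.createClass({
  render: function () {
    return <div className="greeting">Hello, {this.props.name}!</div>;
  }
});

// Mount it into a placeholder element on the page.
React.render(<Greeting name="World" />, document.getElementById('app'));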

Modernizr - A JavaScript library which detects HTML5 and CSS3 features. After detection, it makes supported features known both through a JavaScript object and as classes appended to your <html> element. This makes it easy to write conditional code based on features and to add specific CSS to adapt your site in a feature-responsive way. This is a widely popular library used on tons of sites.
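Conditional code then looks something like this (the two rendering functions are hypothetical; the .no-canvas class is what Modernizr adds to <html> when the feature is missing):

// Branch in JavaScript based on a detected feature.
if (Modernizr.canvas) {
  drawChartOnCanvas();       // hypothetical canvas-based rendering path
} else {
  renderStaticChartImage();  // hypothetical image fallback
}

// In CSS you'd target the same detection via the classes on <html>,
// e.g. ".no-canvas .chart { background: url(chart-fallback.png); }"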

Bower - A great package management solution which uses Node.js and npm under the hood. It exposes your package dependencies via an API to be consumed by your (and public) tech stacks. Bower is widely used and allows consumers to assemble packages and install components into their applications from around the web.


…also, since I mentioned this a couple times regarding currently popular MVC frameworks, here are the candidates laid out on the spectrum:

[lightweight imperative JavaScript] - Backbone - Ember - React - Angular - Web Components - [heavily wired declarative HTML]


Honorable mentions:
Video.js - easy to use for videos
jQuery UI - widgets built on top of jQuery
Less - programmatic CSS
CodeMirror - in-browser code editor
Typeahead.js - type-ahead suggestions for inputs

Special honorable mention:
RequireJS - I use this library in all of my projects because it makes all of your JS files/components easy to load and manage just like classes are in good back-end object-oriented languages. Just like that one commercial says, “I put that **** on everything!”
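For the curious, the setup I keep reusing looks roughly like this (the paths and module names are made up, and the named define would normally live anonymously in its own file):

// Tell RequireJS where the libraries live.
require.config({
  paths: {
    jquery: 'libs/jquery',
    underscore: 'libs/underscore',
    backbone: 'libs/backbone'
  }
});

// A module declares its dependencies up front, much like class imports.
define('views/taskList', ['jquery', 'underscore', 'backbone'], function ($, _, Backbone) {
  return Backbone.View.extend({ /* ... */ });
});

// Somewhere else, load the module and use it.
require(['views/taskList'], function (TaskListView) {
  new TaskListView({ el: '#tasks' }).render();
});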


Recently I got a chance to attend a great presentation by Jonathon Colman of Facebook, called “Integrated Content Strategy”. He shared some great insights on what content really is, and what every product strategist needs to know:

First of all, content strategy is not all about copywriting (which is writing copy for the purpose of advertising or marketing). A copywriter’s copy is meant to persuade someone to buy a product or to influence their beliefs; the goal of great content strategy, however, is creating meaningful and unambiguous experiences. But the question remains: how do we talk about what we do? The answer: in order to successfully plan the creation, publication, and governance of content, we must first understand the content’s identity.

Content is really the entire experience, not just words, fonts, code, design, or ads. Just because you have rich content doesn’t mean you provide the right experience. Content includes the entire team or service provided. It has a huge focus on feelings, and the strategy of content is really the strategy of relating to people. Content involves people, and people are political. For this reason, most CMSes do not provide a great experience and are in bad need of content strategy.

Quite frankly, the core principle of content strategy is empathy. You have to understand people. Empathy is the antidote to politics, and it comes in many flavors. The voice and tone with which you serve content is very important. You need to be in tune with “feelings” from customers whenever possible. One way to master empathy in content strategy is to understand what people really value, and what they believe they are measured by. In essence, if you figure out what people want, you’ll be golden. Of course, this is not easy to do and requires extensive familiarity with people’s habits over time.

For content strategy, concept maps are a useful way to help organize your designs. This is because content strategists don’t just make “things”, they make systems which make “things”. If you’re doing it right, your customers should not be able to tell where content ends and where design begins. It’s also good to have consistent content templates. Having different views which essentially have the same functionality means tons of wasted time for your designers and developers. Consistency reduces your technical debt, which ultimately affects your budget. Strive to have standards for content that are governed and visible to everyone. This helps a lot when new content is added.

It’s often difficult to manage and audit your content inventory well. The hard part is figuring out which of the content you actually have is content that both you and your customer want. It’s a tiny sweet spot, and choosing the wrong content can be political and get you in trouble. A good idea is to always provide visual user experience metadata somewhere in your views. You can think of metadata as “a love note to the future”. You should invest in it when you can, because with strong metadata you can build great APIs, and that’s very valuable.


Here are my thoughts on an Angular presentation I saw today:

First of all, at a glance, Angular looks like a solution that sits between what some currently use (Require, Backbone, Underscore) and the up-and-coming Web Components standards. My opinion is that current Web Components implementations are already cleaner and easier to use/build than Angular; so you can think of Angular as “Web Components Lite”, or “Beta”, and it might be worth just leap-frogging it and getting straight into stuff like Polymer.

I think a huge difference between Backbone and Angular is how “heavy” they are: Backbone is very lightweight, and its models, views, and router are completely abstracted away from the DOM. This means you can move models and views around without worrying about how they plug into the DOM; on the other hand, a “heavier” framework like Angular is completely tied to the DOM and is used in a declarative way. This means components are chunks of interwoven HTML and JS and are rather hard to separate. But it also means the chunks themselves can be easily re-used if implemented correctly. Again, I see Angular as something that Web Components will replace in the future anyway.

It did seem that using Angular removes the need for RequireJS (replaced with dependency injection), Backbone (DOM data bindings, inline controllers, and DOM-level scoping), and Underscore (template partials and HTML render-manipulation directives like ‘ng-repeat’). These are all very close to Web Components features (like shadow DOM, templates, and data bindings), but the main difference is that Web Components work closer to native browser implementations, as opposed to Angular’s engine, which constantly has to run through the site and recompile.
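To make that concrete, here’s a small sketch of the Angular style (the module, service, and endpoint names are invented): dependency injection stands in for RequireJS, and the declarative bindings stand in for hand-rolled templating.

// Angular injects $http into the service, and the service into the controller.
angular.module('demoApp', [])
  .factory('taskService', function ($http) {
    return { list: function () { return $http.get('/api/tasks'); } };
  })
  .controller('TaskCtrl', function ($scope, taskService) {
    $scope.tasks = [];
    taskService.list().then(function (response) {
      $scope.tasks = response.data;   // picked up by the DOM via data binding
    });
  });

// The matching markup is declarative HTML along these lines:
//   <ul ng-controller="TaskCtrl">
//     <li ng-repeat="task in tasks">{{ task.title }}</li>
//   </ul>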

…it might be useful to use Angular, but it takes longer to learn…I say stick with Backbone until Web Components completely come around, then perhaps try hybrids of both :)


If you haven’t heard about web components yet, then take this opportunity to learn as much as you can about them, because they are ushering in a new age in web development. Led by the brightest minds at companies such as Google and Mozilla, web components are about to change the entire landscape of how you build your web sites. What we now call “web components” is a series of emerging W3C standards that allow developers to define custom HTML elements, interact with them using the native DOM, and extend HTML and its functionality. They are based on specs for building UI components as custom HTML elements, which deal with high-level app concepts as well as low-level DOM manipulation. Essentially, they turn your DOM into your main development platform. Think of web components as interchangeable building blocks of websites, which can be pre-built for you. They let you render a component by mentioning a single element, and all of the component’s internals, such as HTML, CSS, and JavaScript, are taken care of for you. This gives us a unified way to create new elements that encompass rich functionality and render as expected, without the need for all of the extra libraries.

This is the new way to build HTML. Everything on the page can be a web component. You can make web components that do anything, even render 3D webGL code from a single DOM element. Their real power comes from the fact that all of their complexity is hidden. They take attributes and use them as params internally to ‘configure’ their behavior. Since they make HTML markup declarative, they are very easy to reuse. You can take any DOM segments, even combinations of components, and make new components with them. They are completely defined with simple nested tags, which makes it easy to compose elements together, even ones built by different libraries because the common language is the DOM. Overwhelming markup trees (referred to as ‘div soup’) can be replaced with new web component tags which also eliminate the need for tons of JavaScript. The new tags separate concerns cleanly, which helps make scalable applications. Currently, too many frameworks have their own ways of implementing new visual elements; web components, on the other hand, bring in standard component tags which only expose the relevant data (much like a select tag renders the entire select box for you without exposing the DOM details, or a video tag renders a video player without exposing the markup of the controls). Web components will help SEO by making more information available to crawlers via markup. Also a huge benefit of web components is that they extend web development from being just for programmers, to anyone who can use HTML.


Polyfills, also known as “shims that mimic a future API, providing fallback functionality to older browsers”, serve as a bridge to web components and have already been widely used for some time. Creating them, however, took a lot of work and they are still hard to use together. You can think of Polyfills as a layer over the current native browser elements. They bring you fallbacks and compatibility for new components across all browsers; however, their performance can be lacking. Polymer, a huge Google project, is a framework which was developed to serve as a layer on top of the existing polyfills and as a platform for new web components. It uses the latest web technologies to let you create custom HTML elements. Under the hood, Polymer uses a set of polyfills to help you create web components on all major browsers. It is simple to use, and allows us to create reusable components that work as true DOM elements while helping to minimize our reliance on JavaScript to do complex DOM manipulations and to render UI results. New Polymer elements are easy to create using templates, and existing elements include: animation, accordions, grid layout, ajax capabilities, menus, and tabs (it has over 100 of them). In fact, PlayStation4 used Polymer custom elements to build its UI. Polymer can be installed in separate modules, or even separate components (uses Bower).

“Introduction to Web Components” is a W3C spec which defines a set of standards for web components. It strives to go beyond what current CSS and JavaScript can provide for the DOM:

Templates - define chunks of markup which are inert but can be activated and used later. They use <template> tags to specify the markup fragments.

Custom Elements - let you create new elements with new tag names and script interfaces. They use <element> tags and allow you to extend other HTML elements via prototypes (similar to JavaScript objects). Nicknamed ‘custom tags’, they can nest scripts and come with useful lifecycle callbacks.

Imports - define how to load web components from external files using the link tag.

Shadow DOM - encapsulates DOM trees for UI elements, and is perhaps the most intriguing standard in the spec. You can connect a shadow DOM tree to any element, and it will act like normal DOM, but it is not actually connected to its host element the way normal DOM sub-trees are: during rendering, the shadow DOM is rendered instead of the real child nodes of the element it is attached to. Shadow DOM trees use <content> tags to match which specific children of the original DOM get rendered, and insertion points to specify where. Possibly the best feature of shadow DOM trees is that they are separated from the original DOM by a boundary which keeps CSS and JavaScript from bleeding through. This effectively creates a scope for the shadow DOM, which gives its authors a lot of control over how content inside interacts with the surrounding DOM. It allows us to encapsulate everything just like iFrames do, but in a much more controlled and cleaner manner.

Data bindings between web components help them talk to each other (using mustache syntax), and allow non-visual components to serve as data processors that talk to visual components and share data with them. If you want to get started creating web components using these specs, there are boilerplate projects on GitHub to help you with this.
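To give a flavor of the raw APIs, here’s a tiny sketch of a custom element with a shadow tree, using the v0-era calls (document.registerElement, createShadowRoot) that the polyfills and Chrome exposed at the time; the element name and markup are made up:

// Build a prototype for the new element, based on HTMLElement.
var proto = Object.create(HTMLElement.prototype);

// createdCallback is one of the custom element lifecycle callbacks.
proto.createdCallback = function () {
  var shadow = this.createShadowRoot();
  shadow.innerHTML =
    '<style>p { color: steelblue; }</style>' +   // styles stay behind the shadow boundary
    '<p>Hello, <content></content>!</p>';        // <content> pulls in the host's children
};

// Register the tag; usage in markup would be <x-greeting>World</x-greeting>.
document.registerElement('x-greeting', { prototype: proto });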

Besides Polymer, other popular web component platforms include X-Tag and Bosonic. X-Tag is a small library made by Mozilla that brings web components’ custom element capabilities to all modern browsers. It lets you easily create elements to encapsulate common behavior, or use existing custom elements to quickly get the behavior you’re looking for. X-Tag is actually built on top of Polymer’s polyfills, and includes some cool built-in components like Panel (mimics an iFrame), Modal, and Map (Leaflet). Bosonic follows the web components spec closely and includes lots of components: collapsible, sortable, datepicker, dropdown, datalist, tooltip, accordion, draggable, toggle-button, tabs, autocomplete, resizer, dialog, selectable, and flash message.


There are tons of cool examples of web components in action today. One such example is the combination of a Reddit element (which grabs data) and an AJAX component, which together easily read from a Reddit site using AJAX and post the data back into your DOM, all without writing any JavaScript! You can take a Google Maps element which someone has made, add marker elements which someone else has made, and the components know how to talk to each other and render together harmoniously. Some have made local storage elements, which store data in tags. I saw an amazing designer interface which lets you piece custom elements together visually in a sandbox, with configurable data bindings, all without even looking at the markup. component.kitchen is a gallery website with tons of component examples. customelements.io is a website with a huge list of custom web components, even sorted by popularity. Finally, webcomponents.org is a simple, neutral site/community devoted to encouraging best practices for web components.

Web Components are not fully supported in Safari and Internet Explorer yet; however, with Polymer, web components are supported in the two latest versions of all modern browsers (with the exception of IE8, IE9, and Android). Chrome Canary is currently the best browser for use with web components.


Last week I got to see a presentation by Ilya Grigorik based on his book, High Performance Browser Networking. Ilya Grigorik is a web performance engineer and developer advocate on the Make The Web Fast team at Google, where he spends his days and nights on making the web fast and driving adoption of performance best practices. Although the networking side of web technology is extremely fascinating to me, my knowledge of it has been limited to Network Essentials classes from college coupled with sporadic wikipedia lookups. Ilya changed a lot of this for me with this presentation, and I am truly amazed at the technological achievements in networking that are coming down the global web pipeline this year.

There have been a lot of changes in recent years that have made sending and receiving data faster, especially on the client side; however, 70% of the request lifecycle is still spent in the main bottleneck: the network. To even get to the application, data has to pass through network protocols such as HTTP, TLS, TCP, and IP, and be transmitted over mediums such as cable, radio, or wi-fi. For a mobile phone, the average data round-trip time (phone to radio network to core network to public internet to server and back) is about 100 milliseconds; in the US it’s almost twice as fast. The performance of such a request is based on both bandwidth and latency, but latency is the real issue. Even if it could travel at the speed of light, an HTTP request is still slowed down at the “connection points” by DNS lookups, socket connects, and content downloads.

Meanwhile, user patience is on the low end: any delay over a tenth of a second is considered “sluggish”, while a whole second will cause a mental context switch. ISP carriers love to advertise bandwidth as “speed”, but the real measure of network speed is latency. Internet packets can arrive in San Francisco from New York City in 21 milliseconds. To give you an idea of how fast that already is, the same trip would take 14 milliseconds at the speed of light (it takes 133.7 milliseconds for light to go completely around the world). So internet packets are traveling at about two-thirds of the speed of light thanks to optical fiber cable, and advances to make it even faster are already in the works. Unfortunately, the latency from the ISP to your router adds another 18 milliseconds.

Furthermore, every TCP connection begins with a handshake, and no data can be sent until this handshake is complete. TCP slow-start is a technique used on the server response to limit how many segments of data are sent back: it starts with a small limit (originally four segments), which doubles on every subsequent round trip, so a new connection ramps up gradually as a way to control data traffic and keep the “pipes from being stuffed”. While this technique is essential for managing internet traffic, it is a huge problem for latency because it is enforced on every new connection. Fortunately, HTTP keep-alive keeps the last congestion window up even after a long pause, and the number of starting segments has been upgraded from 4 to 10. Still, the average web page has around 100 resources fetched from 11 different servers. Web Sockets are useful here because they reuse the same underlying TCP connection, already past slow-start, for every message after the connection is opened.
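A quick sketch of what that looks like in code (the endpoint URL and message shape are made up):

// One handshake up front; every later message reuses the same warmed-up TCP connection.
var socket = new WebSocket('wss://example.com/updates');

socket.onopen = function () {
  socket.send(JSON.stringify({ subscribe: 'news' }));
};

socket.onmessage = function (event) {
  console.log('update:', event.data);
};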
Finally, the biggest thing happening in networking this year is HTTP 2.0. In fact, SPDY, the basis for the first HTTP 2.0 drafts, is already implemented in all modern browsers. Current HTTP 1.x techniques such as domain sharding, concatenating files, spriting images, and resource inlining will be improved upon or deemed unnecessary once HTTP 2.0 replaces SPDY. With HTTP 2.0, internet data will be multiplexed, prioritized, and streamed over a single TCP connection, allowing servers to send the most important data first, with the order of transmission only mattering within each individual stream. This will allow users to open as many streams as they need over the same connection. HTTP 2.0 uses a binary framing layer, flow control, server push, and header compression to reach its goal of cutting down internet wait time, and should be implemented in most modern browsers by the end of the year!


With the likes of Facebook and Twitter using cards as the main way to display web content, the time to evolve how users experience the web has come again. Interactive web cards of today not only display information to the user, but also lure the user into playing with them. Users get caught up in a moment of exploration that has them focusing on an already familiar card template and playing with the accessories of its content. Forms of this include: adding comments which appear below an image, sharing or voting on the content with quick finger taps or clicks, watching a video or an interactive slideshow, and even playing a game right on the card. The familiarity with the card container will guide users in their exploratory experience and keep them engaged, but past that, what you put inside the card template is up to you. If you think about it, there is really no limit to what you can place in a card for users to play with. You just need to have the proper content management tools to support building such cards in a way that gives you the most room to be creative while keeping a familiar card experience for the user.



ThisMoment is currently developing an environment that will let you not only take all of your content and display it in card templates, but also allow you to use custom applications inside the cards themselves. Developers will be able to build their own custom card applications or download them from a store, and then drop them right into their content environment with ease. It will be up to the developer whether they use already built card templates or display content in an entirely different interactive way. This will allow you to put the power of card interaction right into your users’ hands, custom tailored for your brand’s desired experience, while adhering to our powerful interactive card environment. As a result, these interactive card applications should help drive more purchase intent from ‘interested’ to ‘buying’, especially when it comes to content associated with a product.

Content Cloud, which could already be available by the time you read this, is a powerful stepping stone to such an environment. It not only gives you industry-grade card content management features, but it could also pave the way for creating more advanced card-playing experiences in the future. Its innovative card content approach, coupled with a sturdy platform which took years to perfect, makes it a great asset for all brands. The card content experience is not exclusively for the Facebooks and the Twitters anymore. Content Cloud makes it easy to gather all content relevant to your brand and bring it to your users in the form of card content playlists. But this is only the beginning.



In the future, cards will be more engaging, more interactive, and more meaningful to the end user. Card interactions could include new ways for users to ‘handle’ them such as having the ability to flip cards visually and see accessory content on their backs. Card templates could come with custom card widgets, customizable by site admins on the fly. Live updating components on the cards such as streaming conversations, tickers, and notifications could really bring the cards to life. Even collecting the cards in users’ own ‘card decks’ could become a reality. Playing with cards will truly become an awesome user experience.


What language exists on every single smart device these days? If you said JavaScript, then you are correct. Throw in HTML5 and CSS3 and you’ve got yourself an application that can pretty much do whatever you want…as long as there’s a browser environment there to make it happen. When it comes to app development, we live in a world where the most successful applications are the ones that are most supported across all environments and are built with the most support from all communities. Therefore, it only makes sense that the operating system of the future is one which is a web-based platform. Last week, I saw an inspiring presentation with some co-workers at Yelp HQ, put on by a guy named Nick Desaulniers (@LostOracle). He opened up the event by asking everyone these great questions:

Is the browser the first thing you launch on your desktop?
Can your browser do everything your desktop can?

To the first one I answered yes immediately, and to the second question…well, I’m hoping to answer yes to that one as soon as possible. Under the covers, the presentation had a lot to do with Firefox OS, a Linux kernel-based open-source operating system for mobile devices; however, Nick didn’t exclusively make the presentation about Firefox OS (developed by the non-profit Mozilla), and he didn’t have to. The idea which Firefox OS is based on is novel enough to present on its own: a “complete” community-based alternative system for mobile devices, using open standards and approaches such as HTML5 applications, JavaScript, a robust privilege model, open web APIs to communicate directly with cellphone hardware, and an application marketplace. This definitely got my wheels spinning. If PhoneGap was so successful in re-creating existing apps with web technologies and then porting them to every platform, then why couldn’t there be an entire operating system built around this? Actually, if everyone switched to an operating system such as this, then everyone could code together in harmony. I’m not talking about “my code is better than yours” here. I’m talking about “let’s all speak the new web language; after all, it’s what everyone already knows”. Firefox OS’s project proposal was literally to “pursue the goal of building a complete, standalone operating system for the open web in order to find the gaps that keep web developers from being able to build apps that are – in every way – the equals of native apps…”




Of course, a platform such as this requires a few things: new web APIs to expose device and OS capabilities such as the telephone and camera, a privilege model to safely expose these to web pages, applications to prove out these capabilities, and low-level code to boot on an Android-compatible device. But the standards-based open web has the potential to be a competitive alternative to the existing single-vendor application development stacks offered by the dominant mobile operating systems. I mean, everything can run in JavaScript! Furthermore, existing vendor-specific apps can be repackaged in a web technology stack (again, think PhoneGap), and are then ready to be used on such an operating system. People have gotten so used to using web technologies to build internet applications, yet they often fail to notice that these technologies are powerful enough to do anything their desktop can (in theory). You can run JavaScript without an internet connection; well folks, the same goes for making apps.
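Just to show the flavor of hardware-ish APIs already available to plain JavaScript (these are standard browser APIs rather than the Firefox OS-specific WebAPIs, which go further into telephony and the like):

// Geolocation: the browser prompts the user, then hands back coordinates.
if ('geolocation' in navigator) {
  navigator.geolocation.getCurrentPosition(function (position) {
    console.log('lat/long:', position.coords.latitude, position.coords.longitude);
  });
}

// Vibration: buzz the device for 200ms, where supported.
if ('vibrate' in navigator) {
  navigator.vibrate(200);
}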

The problem with current mobile operating systems is that they are at odds with each other. A web-based platform which completely uses open standards, without proprietary software involved, is much more accessible to every developer. If the software stack is entirely HTML5, then there is already a large number of established developers. If we can bridge the gap between native frameworks and web applications using W3C standards, then we can enable developers to build applications which run in any standards-compliant browser without the need to rewrite them for each platform. But are the current web technologies up to the task? JavaScript has gotten 100 times faster since 2006. Many existing applications were written in widely used, high-performance languages such as C, but there are efficient ways to convert C to JavaScript; in fact, JavaScript is one of the most common target languages for converting applications today. JavaScript vendors are now optimizing their JavaScript engines, a necessary step for the convergence of web technologies and native operating systems. Of course, we can’t forget the maliciousness that these inter-operating-system apps could cause, which is why browser operating systems need to have powerful permission models. The operating system says, “Sure, we’ll let you in if you speak our web language, but that doesn’t mean you can do whatever you want, buddy!” This is why browsers like Chrome have implemented “process isolation” patterns. HTML5 apps use application-based permissions, which have worked in practice, but their standardization process is slow because of the numerous scenarios they entail.


In all of this hype, there are also the doubters. People have complained that web technologies just can’t deliver the same graphically seamless and error-proof user experience that native apps can. I have seen this firsthand, but I am not foolish enough to think that this will always be the case. With all of the current innovative, collaborative, and even competitive drive to make the best operating system experience possible out there,  I can’t wait to see the browser operating systems of the future. While FirefoxOS is just a first run at this, it is truly a driving force in the web versus native war, which as we all know, web is going to win, hands down :)


When I think of content playlists, the first thing that comes to mind is iTunes. This application became the most popular playlist organization tool after iPods and iPhones took off. And why not? It allowed you to easily import (rip) all of your favorite songs right off of any CD in your collection (you guys remember when you had CDs, right?) and put them into one easy-to-manage library. Once inside this library, your media can be organized and analyzed in numerous ways, helping you figure out what you want to do with it. To me the most interesting part of all this was creating the “smart” and “dumb” playlists, which you would use to upload the media to devices, where it would be listened to by its target audience: you.

Having multiple playlists at your disposal allows you to quickly make important decisions as they are needed. Sometimes it’s an all-out “best-of” day; sometimes we want to listen to the new stuff which just became available. Configuring “smart” playlists is easy in iTunes because of the filtering system. Filters can help you narrow the results of your playlists and act as metadata rules which can be stacked on top of each other, as well as easily removed. It’s a system that can produce sophisticated playlists while using an approach which is easy to manage. Of course, you always have the option of going with the “dumb” playlist, in which you pick every piece of content yourself; however, this can get tedious if there are massive amounts of content to choose from.

The smart playlists in iTunes are also “live updating”, which is a very attractive feature for those looking for a more streamlined experience. By having the application search for and insert new content right into your playlists from the ever-changing library, the user starts to turn their playlists from mere collections into fully automated media stations. The idea here is that playlist selection goes from being tedious and predetermined (think YouTube) to automated and responsive (think Pandora). The best part about having “live updating” playlists is that you can tailor which content you actually want to keep or discard by experiencing and reacting to it (star ratings, up/down thumbs, votes, etc…).

Even when these smart playlists or stations start to repeat somewhat, the order of the content is still shuffled and randomized to keep up interest. A big difference to note between iTunes smart lists and Pandora stations is that smart lists grab content based on a series of metadata rules, whereas stations artificially create these filters for you from any one piece of content of your choosing, completely abstracting this step away from the playlist administrator. YouTube has a completely different approach: it does not have any “smart” playlists or stations, but it does try to recommend related content for you to experience (and possibly add) wherever applicable, and that’s pretty much everywhere you find content.

P.S. Check out this “XML Shareable Playlist Format”, which has been around since 2004:

<?xml version="1.0" encoding="UTF-8"?>
<playlist version="1" xmlns="http://xspf.org/ns/0/">
  <trackList>
    <track>
      <title>Windows Path</title>
      <location>file:///C:/music/foo.mp3</location>
    </track>
    <track>
      <title>Linux Path</title>
      <location>file:///media/music/foo.mp3</location>
    </track>
    <track>
      <title>Relative Path</title>
      <location>music/foo.mp3</location>
    </track>
    <track>
      <title>External Example</title>
      <location>http://www.example.com/music/bar.ogg</location>
    </track>
  </trackList>
</playlist>


I went to a LiveCode meet-up recently hosted by Diego Ferreiro, and boy am I glad I did. Diego’s presentation itself was a bit rushed, and he spoke with a thick accent, but he sure got the wheels in my head spinning about the role of the GPU in modern web browsers. Actually the most interesting part of the presentation was his code demonstration of how to solve Facebook’s “infinite scroll” problem, but I’ll come back to that…after the meet-up, I had some time to research browser rendering and found some awesome articles by Paul Irish, Paul Lewis, and Tom Wiltzius. Here is what I found out:

First let’s separate the phases of browser rendering. The following steps render the elements in the DOM into images on your screen:

1) Trigger - Elements are loaded into the DOM, or are modified in some way

2) Recalculate Styles - Styles are applied to elements (or re-calculated)

3) Layout - Elements are laid out geometrically, according to their positions on the screen

4) Paint Setup - The DOM is split up into render layers, which will be used to fill out the pixels for each element

5) Paint - Each layer is painted into a bitmap and is uploaded to the GPU as a texture by a software rasterizer

6) Composite Layers - The layers are composited together by the GPU and drawn to a final screen image

The important thing to note here is that these steps trigger each other in a waterfall way. Normally, we change CSS properties like width, height, margin, or top, causing styles to be recalculated, and elements to be laid out. This causes the layers to be repainted and re-uploaded to the GPU. However, we can skip some of these steps for faster rendering. In fact, the goal here is to completely skip every step but the last one, where layers are composited on the GPU. To do this we can use two CSS properties: transform and opacity. These particular operations do not need any of the steps before layer composition, because they simply reuse the render layers already loaded on the GPU. The GPU can recycle layers which have not been invalidated or “dirtied” and recompose them on the screen in a new frame. I saw the power of this first hand when Diego compared demos with and without the use of these GPU-accelerated properties. The difference was almost breathtaking.

The main problem with calling properties that trigger a re-layout or a re-paint is that everything on the layer of a modified element is uploaded to the GPU.  This is very expensive on mobile devices, where painting work takes longer; and the bandwidth between the CPU and GPU is limited, so texture uploads take a long time. Not all CSS properties trigger a re-layout. Some just trigger repainting, like box-shadow, border-radius, background, and outline. This still requires a reloading of the render layers to the GPU, although it’s faster than having to re-calculate styles and layout. Having to re-layout even one element can cause a chain of other neighboring elements to be re-laid out, sometimes going all the way up the DOM tree. Making elements absolutely positioned or fixed can help prevent this because they have “broken out” of the DOM tree.

Using the transform and opacity CSS properties to position, scale, rotate, and filter your elements not only limits the browser rendering steps but also forces the elements to use their own rendering layers. You can use translate3d or translateZ in your transform property to do this. Forcing layers to be created ensures both that the layer is painted and is ready-to-go as soon as the animation starts. Forcing layers to be used can cause slight delays in some animations, but this can be avoided by promoting the layers before they are rendered. You can also use JavaScript to apply these properties directly to the elements which gives you great control over starting, pausing, reversing, interrupting and canceling the animations; however, this can put more of a load on the browser’s main thread, whereas true CSS declarations are optimized by the browser ahead of time.
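Here’s a small sketch of driving such an animation from JavaScript (the element id, distance, and duration are made up; older browsers may want vendor-prefixed transform):

var box = document.getElementById('box');
var start = null;

function step(timestamp) {
  if (!start) start = timestamp;
  var progress = Math.min((timestamp - start) / 1000, 1);  // 1-second animation

  // transform and opacity skip layout and paint; the GPU just recomposites the layer.
  box.style.transform = 'translate3d(' + (progress * 300) + 'px, 0, 0)';
  box.style.opacity = 1 - progress * 0.5;

  if (progress < 1) requestAnimationFrame(step);
}

requestAnimationFrame(step);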

Here are two great articles on how you should add optimized CSS animations to your pages:
http://blog.alexmaccaw.com/css-transitions
http://www.kirupa.com/html5/all_about_css_animations.htm

The best utility I’ve seen to inspect all of this fun rendering is Chrome’s DevTools. The timeline view will show you all of the rendering steps, sectioned out by frames if you wish. This is very useful for studying how to optimize the visual rendering on your website. You can also enable “show composited layer borders” in the rendering settings (Chrome Canary), which highlights where layers are on-screen. A good way to see what’s getting painted is the “show paint rectangles” setting, which shows repainted regions flashing red.

Now, let’s go back to Diego’s infinite-scroll solution: The problem with most infinite-scroll implementations is that the DOM gets too big after a few loads. This slows down the scrolling itself because the entire DOM has to be recalculated, re-laid out, and re-rendered every time you scroll; and since the DOM only grows this is a losing performance battle. The solution here is actually to not add new DOM elements, and instead recycle them with “simulated” scrolling. By using GPU-accelerated transforms you can actually move the HTML chunk “surfaces” on your page from one end of the page to the other while they are out of the range of the viewport to simulate scrolling. One rendering layer is assigned to each “surface” for maximum rendering efficiency and the browser’s requestAnimationFrame function is utilized to make sure animations are interpolated with the right frequency. Finally, for peak performance he decoupled script events from the animations to get seamless interactions. In the end we were all looking at phones infinitely scroll with amazing speeds which you’d have to see to believe :)

…by the way, if you want to actually learn all of the complex inner workings on browser rendering check out this link:
http://www.chromium.org/developers/design-documents/gpu-accelerated-compositing-in-chrome


Recently I looked into whether using iFrames to load lots of individual pieces of content on a page is a good idea:

iFrames separate pieces of content from each other so that they do not interfere with each other’s scripts and styles

…but…

Not using iFrames allows you to manage your content blocks with a single system on the page, so that the content pieces can share resources such as scripts and styles for efficiency and ease of development.

On the one hand iFrames seem like an ideal solution because they turn the content and everything it needs into a “packaged” product, which can then be dropped into anywhere and can be made to “just work”. If you think of the content as an independent entity, then this makes perfect sense. However, there are some disadvantages with this approach too. When similar pieces of content are all rendered together in a controlled environment, does it make sense to load the same scripts and styles again and again for each piece of content? AJAX and main page DOM manipulation should yield faster load times than using iFrames - the total DOM tree will be smaller, and some say loading iFrames is a tiny hit on performance (each iframe has to create a new render context). Also, iFrames tend to get in the way of development in many (and often unexpected) ways like making it harder to debug scripts inside of an iFrame.

Of course, with the non-iFrame approach, the shared JavaScript and CSS need to be namespaced. RequireJS can make managing script dependencies a cinch. For the CSS, you could have utility methods which dynamically add each piece of content’s styles to the main page as needed. You would just have to make sure that each piece of content’s markup class names start with their own unique prefix, and that all of the selectors in their stylesheets only use these prefixed class names (don’t use ids). CSS shared by every piece of content would already be loaded on the main page.

Sounds doable, but I actually like a hybrid solution here: I think a piece of content should consist of a “wrapper” and an “iFrame”. These could be bundled together neatly in a framework such as backbone, that would define everything each piece of content needs. The “wrapper” would consist of some markup which would be used as an interface between the shared scripts/styles on the page and the inner iFrame itself. The “wrapper” markup would have listeners (managed by the main page) that would be responsible for common and shared functionality like moving, sizing, loading, and interacting with other pieces of content. Hopefully as much reusable javascript as possible could be extracted out of the iFrames and into the main page for this kind of functionality. The wrapper could also hold variables that mimic and match the conditions inside the iFrame so that there is maximum visibility into the iFrames without actually having to go into them. The iFrame itself would then only contain content-specific scripts and styles (besides of course, the content), therefore helping overall page performance. With this approach you could still drop the iFrame into a separate environment with ease, perhaps one with different “wrappers” and “wrapper functionality”.
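As a sketch of how that “wrapper” might talk to its iFrame (the class names, message shapes, and use of Backbone here are all just one way to do it):

// The page-level wrapper owns shared behavior and talks to its iFrame over postMessage.
var ContentWrapper = Backbone.View.extend({
  className: 'content-wrapper',

  initialize: function (options) {
    this.frame = $('<iframe>').attr('src', options.src).appendTo(this.$el)[0];
    window.addEventListener('message', this.onMessage.bind(this), false);
  },

  // Shared, page-level functionality (sizing, moving, etc.) lives out here in the wrapper.
  resize: function (height) {
    this.$el.height(height);
  },

  // Only react to messages coming from our own iFrame.
  onMessage: function (event) {
    if (event.source !== this.frame.contentWindow) return;
    if (event.data && event.data.type === 'resize') this.resize(event.data.height);
  },

  // Push state into the iFrame without reaching into its DOM.
  // In practice, pass an explicit target origin instead of '*'.
  send: function (message) {
    this.frame.contentWindow.postMessage(message, '*');
  }
});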

So in summation, without iFrames, the pieces of content are more closely tied to the page and are less “portable” and are more “tangled together” but the overall page structure is more sound and performant. With iFrames, a nice “separation” and “portability” is achieved but the overall page structure is more dispersed and less efficient. If you do use iFrames, I hope you consider the “wrapper” interface for better content piece management within the page. Think of the “wrapper” like the slick lamination covering cards in a playing-cards deck, which makes it easier to slide the cards on top of each other and shuffle :)