Have you ever wondered what it takes to generate a viral piece of content? And no, I don’t mean something which is medically contagious (https://www.youtube.com/watch?v=NizrG2oZYW0). I’m talking about that one video which gets played by everyone in the office (https://www.youtube.com/watch?v=kCSFMqkcqoM), or at least by that one hipster who knows every viral video ever created. Well, there isn’t exactly one formula which will get you there, but following my five-step guide could help you get started!

#1 - Choose the right content

If you look at the various popular videos and images on the web, you’ll notice that some of the most famous pieces of content revolve around a few topics: kids, talent, pets, crazy, offensive, and of course, adult. Does this mean you’re stuck with making a video of a cute child/baby or cat/dog? (https://www.youtube.com/watch?v=ZlOsu870j8E) Of course not, you can also go with something inappropriate for a general audience. But let’s break down some of the basics: If it’s cute, then you have to take a picture or video of it immediately. Same goes for funny, but if you can get cute and funny together, then you might hit the jackpot. The easy part about generating cute content is that if you have a cute kid or pet, then cuteness will just always be there for content creation…just ask Maru (https://www.youtube.com/watch?v=JqTfk7Etr3c). Funny, on the other hand, seems to be spontaneous (https://www.youtube.com/watch?v=TvXnhtzjhRE), and is usually turned into content by accident; so unless you’re a comedian or an actor, you might want to skip this one.

Quite frankly, the easiest and fastest way to generate content is by taking pictures of everything, all the time. You don’t really need to have a plan. The plan for most teenagers seems to be: “Is this cool? Do I look cool? Then I need to take a picture ASAP!”. Ah yes, the “selfie” is born (https://www.youtube.com/watch?v=aKkdm71X4e8). I think it goes without saying that the urge to look cool on the internet is a gigantic driving force of user-generated content. In fact, brands will look for this type of content over everything else because this content doesn’t look like it was “generated” according to some plan devised by the brand. Brands just want to show people as they are…or at least, as they are when they are trying to look as cool as possible. So yes, if you want to game the system, you might choose to generate content that looks like you being yourself, but is in fact you using or wearing clothes and products of large companies that happen to have extensive marketing departments. Don’t worry, I won’t tell anyone.

#2 - Choose the right device

Back in the day, if you wanted real content, you had to hire a portrait painter and sit for a while. Sharing content was limited to parties where you hung your piece of content on some wall and talked about it. Now you’ve got random painted nails visible to everyone in the world, with comments, and maybe even product solicitations (gallery.sephora.com/board/Nails). These days, any device which can capture a moment in time, whether as image, video, or sound, works for content generation. Most devices are optimized to make the creation of internet-friendly content easy. What changes between the devices is usually the quality of the capturing and storing technology, and how the device transfers the content to other places. I’m not sure how much this affects a piece of content trying to go viral, but it might affect brands’ decisions to use your content, since they love high-quality image and video content.

#3 - Choose where to upload


Now that you have your content, you need to make it publicly available, or in other words, put it on the internet. Social networks are usually a great start. The amount of sharing and re-sharing that goes on inside of these interactive networks is insane. Yes, you can just make a website for yourself these days and host your pictures and videos there…but seriously, who does that? You need to alert every single person you know about your fresh new content, and it has to make a lasting impression! Again, social networks are pretty much built for this nowadays. Go ahead and join Facebook, or YouTube, or Instagram, or Twitter, or Ello (https://ello.co/beta-public-profiles) or whatever…Myspace anyone? anyone?…and upload as much content as humanly possible. This is where it will be scraped, tagged, #hashtagged, mentioned, and pretty much hooked into the matrix of web content, ready to be found by anyone, and anything. That, or you can wait around for Google to index your website. Shouldn’t take too long…hang in there.

#4 - Choose promotion strategy

Ok, so you’ve created, captured, and uploaded your content to social media. What do you do now? Here’s one idea: Throw a gigantic party in the middle of nowhere, all centered around your content (http://blog.thismoment.com/ugc-marketing/). Oh, you don’t have millions and millions of dollars for this? Maybe you can blog about it? Tell some friends, write some emails, hire a promoter. Basically there are no shortcuts with promoting your content, unless you either have the advertising capital or your content is seriously, seriously viral. In which case, it will promote itself once it hits social media. Also make it a point to read and re-read this great article on promoting your content by KISSmetrics: (https://blog.kissmetrics.com/17-advanced-methods/)

#5 - Go Viral


So you’ve promoted the heck out of your content and your potato is ready to go viral (https://www.youtube.com/watch?v=cbfnOtMfUmU). Ok…a viral potato kind of sounds gross, and unfortunately there is not much extra you can do at this point unless your content is actually viral-worthy. I could put tons of advice links found all over the internet on the subject (http://www.huffingtonpost.com/noah-kagan/why-content-goes-viral-wh_b_5492767.html), but the best advice is probably not to get carried away trying to force your content into something it’s not. Just have fun creating it, sharing it, tweeting it, tagging it, putting it on social media and content clouds, and the rest is up to the general public. Good luck and may your content always leave them wanting more!



I saw a presentation on Firebase recently by David East, and as a result, my mind is still blown (and yours should be too). Wait…is it true that this service can eliminate the need for the server side of your application? And is it also true that you don’t even need a database anymore, even if you have simultaneous multi-client transactions you need to process? Seriously, does this mean I don’t need to make an HTTP request to my server to authenticate my users, return live data from my DB, and write back to my server when data changes?

The answer to all of these is a resounding YES! But, but, but…how?

Firebase provides a real-time database and back-end as a service. The service provides application developers an API that allows application data to be synchronized across clients and stored on Firebase’s cloud. Firebase’s client libraries enable integration for Android, iOS, JavaScript, Java, Objective-C and Node. The database is also accessible through bindings for several JavaScript frameworks such as AngularJS, React, Ember, and Backbone (Backfire). On top of this, they also host your assets on the Fastly CDN, meaning that you can literally build and deploy a new web application which uses real-time data across multiple clients about as fast as it takes you to purchase auto insurance online.

Real-time data is the key here. Back in the day, when I was building iPhone apps, I realized how hard it was to make a live multiplayer video game. Aside from building the front-end (in this case, using Objective-C), you would need a server which your clients (users playing on your application) could all sync their data to, and retrieve data from. My idea was to create a server which people could just easily send messages to and from in a simple way, like using AJAX calls or other types of asynchronous requests. Of course, Firebase has provided all of the means to do this exceptionally well, and developers all over the world are going to be very glad they did.

With Firebase, you can now build an application which retrieves and stores data via a public REST API, using JSON. Under the hood, Firebase uses a NoSQL database to store and retrieve data in real-time. The real-time data is synced across clients retrieving it, so in some ways this is actually superior to most back-end solutions: Sure, you can use WebSockets to minimize connections from clients, but the messages going back and forth between client and server could still be out-of-sync; on the other hand, Firebase can provide all clients the same exact view of the server state. I guess it makes sense that this service evolved out of a chat client with similar requirements.
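To make the sync idea concrete, here is a toy, in-memory sketch of the pattern. This is not the Firebase API, just an illustration of how a write fans out so every subscribed client ends up with the same view of the data:

```javascript
// A toy, in-memory stand-in for the sync pattern described above — NOT
// Firebase's actual client library, just the idea: every subscribed
// "client" sees the same view of the data after each write.
function SyncedRef(initial) {
  var data = initial;
  var listeners = [];
  return {
    on: function (event, cb) {
      if (event === 'value') {
        listeners.push(cb);
        cb(data); // new subscribers immediately receive the current state
      }
    },
    set: function (value) {
      data = value;
      listeners.forEach(function (cb) { cb(data); }); // fan out to every client
    }
  };
}

// Two "clients" subscribe to the same ref:
var ref = SyncedRef({ score: 0 });
var clientA = [], clientB = [];
ref.on('value', function (d) { clientA.push(d.score); });
ref.on('value', function (d) { clientB.push(d.score); });
ref.set({ score: 42 });
// Both clients converge on the same state:
console.log(clientA); // [0, 42]
console.log(clientB); // [0, 42]
```

The real service adds persistence, authentication, and conflict handling on top of this basic fan-out, but the mental model is the same.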

Firebase also supports authentication (via Simple Login), and security (via Security Rules). With this you pretty much have everything you need to start building server-less web applications. Your users don’t care if you have your own server or database, all they care about is the resulting front-end experience. Being a devoted Backbone engineer, I’m happy to say that Firebase’s Backfire library provides custom collections which seamlessly integrate with the Firebase API. In the demonstration by David East, he quickly put together a Backbone app, and deployed it to Firebase’s hosting services (which use SSL certificates), right from the command line. Once deployed, this application was live to the masses, and he could inspect all of the application’s data in easy-to-read foldable structures using Vulcan, a Chrome plugin which inspects Firebase data. It’s so easy!

Now does this mean everyone needs to drop their servers and DBs and switch to Firebase? Can you replace the back-ends of popular sites like Twitter, Uber, Google Docs, or Facebook? Well…maybe…or maybe not. You see, data from a back-end server isn’t always just retrieved or propagated, it’s processed. Data needs to be aggregated, formatted, compared, curated, etc…In the normal development world, this still requires back-end server code and a database. I mean c’mon, can you really do this without SQL queries? Firebase is currently working on ways to help these common data processing needs with things like priorities (which let you set the weight of certain data requests), formatting methods (such as limiting the data with pagination), and transactions (which help you with data race conditions). But obviously there’s still a long way to go before application-specific back-ends are a thing of the past. Still, Firebase offers a path in the right direction, and I guarantee that it (or a similar service) will change the face of web application development across the internet. I can’t wait to use it myself :)



I thought I’d do a review of the most popular JavaScript frameworks (based on data from JSDB.io); please leave comments/feedback as this is all my personal opinion:

Angular - Yes, this is the most popular JavaScript framework right now. This MVC solution is definitely on the heavier side of development as it wraps HTML and JavaScript into fully re-usable components. I call it “Web Components Beta” because it shares some core features with Polymer: custom components, data bindings, imports, and templates are all in use in Angular (perhaps named differently there); however, the ‘shadow DOM’ feature, exclusive to Web Components, is a lot more in line with the future of HTML development, and the heavily scrutinized, Google-backed Web Components implementations will win out eventually. Angular rejects imperative frameworks such as Backbone because they are less ‘hands-on’ with the HTML, and because frameworks like Backbone keep their technologies apart instead of using them all together. I disagree on this one, as technology soup is great for putting things together, but not for understanding them later.

Ember - Another MVC framework that tries to do all of the heavy lifting for you. Ember makes common MVC tools available to you, such as integrated handlebar templates. Ember has custom components, while its routing and models seem to be easy to use, much like the lightweight Backbone. From a quick glance, Ember seems to be right in between Angular and Backbone on the “heavily wired declarative HTML vs lightweight imperative JavaScript” spectrum of front-end MVC frameworks. Perhaps this is why they advertise it as easy to set up, yet powerful enough to get more advanced component reusability.

Select2 - This is a jQuery-based replacement for select boxes. It supports searching, remote data sets, and infinite scrolling of results. The power of this library comes from the ability to turn your user inputs into components which support tons of user manipulation (like search and multi-select) and make it easy to propagate/retrieve this data to/from the server as well as render/load results on the page. After having to do a lot of work for data-to-input-to-data implementations in the past, I am pretty excited about trying this library out. Oh yeah, and its inputs play nice with infinite-scroll :)

Backbone - This imperative MVC framework is probably the easiest to learn and implement. As long as you know JavaScript, you can easily research how to throw some models and views together, add a router, and you are ready to go. I was able to drop this framework in and rewire one of my own projects in about 15 minutes. Its only hard dependency is Underscore, a lightweight utility and templating library which works very nicely with Backbone. Combined with RequireJS, Backbone becomes an easy-to-implement, well organized MVC solution with a great separation of technological concerns. Having the JavaScript separate from HTML means you have abstracted views which can be pulled apart and moved very easily, which gives developers some crucial flexibility in the later stages of a project. Of course, the downside is that the framework does not encourage reusable, easily testable components, but hey, after working with Java’s Tapestry framework, I have seen “over-componentizing” lead to developers not understanding code internals, and to serious difficulties refactoring without system-wide side-effects.
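The wiring that makes Backbone feel so light can be sketched in plain JavaScript. This is not Backbone’s code, just the change-event pattern its models and views are built around:

```javascript
// A plain-JS sketch of the pattern Backbone's Model/View pair is built on —
// not Backbone itself, just the change-event wiring described above.
function TinyModel(attrs) {
  this.attrs = attrs;
  this.handlers = [];
}
TinyModel.prototype.get = function (key) { return this.attrs[key]; };
TinyModel.prototype.set = function (key, value) {
  this.attrs[key] = value;
  this.handlers.forEach(function (h) { h(key, value); }); // fire "change"
};
TinyModel.prototype.on = function (handler) { this.handlers.push(handler); };

// A "view" re-renders whenever its model changes, with no direct coupling
// between the model and any particular piece of DOM:
var model = new TinyModel({ title: 'Hello' });
var rendered = [];
model.on(function () { rendered.push('<h1>' + model.get('title') + '</h1>'); });
model.set('title', 'Hello, Backbone');
console.log(rendered); // ['<h1>Hello, Backbone</h1>']
```

Because the model knows nothing about its views, you can move either around freely, which is exactly the flexibility described above.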

Three - A lightweight 3D library for “dummies” :) This library provides HTML Canvas, <svg>, CSS3D, and WebGL renderers. It combines a lot of these great, modern rendering implementations into one easy-to-use framework and makes things like rendering a 3D cube simple. Check out all the truly awesome examples here: http://threejs.org/

Underscore - Known as “JavaScript’s utility belt”, it provides many great browser-compatible functions which are commonly used by a large number of JavaScript developers. The template function is especially useful for MVC frameworks using dynamic pieces of DOM to re-render elements on the page. Underscore functions are highly respected throughout the JavaScript community and are usually preferred to their native ECMAScript 5 counterparts. This lightweight framework is easy to use and often makes development cleaner and easier.
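As a rough illustration of what the template function does (Underscore’s real implementation compiles templates to reusable functions and also supports evaluation and escaping), here is a stripped-down interpolation sketch:

```javascript
// A stripped-down sketch of _.template-style interpolation: swap
// <%= key %> placeholders for values from a data object. This is an
// illustration of the idea, not Underscore's implementation.
function microTemplate(source, data) {
  return source.replace(/<%=\s*(\w+)\s*%>/g, function (match, key) {
    return data[key] == null ? '' : String(data[key]);
  });
}

var html = microTemplate('<li><%= name %> (<%= count %>)</li>',
                         { name: 'inbox', count: 7 });
console.log(html); // <li>inbox (7)</li>
```

An MVC view can re-run a template like this against fresh model data every time a change event fires, which is the re-rendering pattern described above.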

jQuery - Kind of amazed it is seventh on the list right now. I assume this is because everyone just takes it for granted, but it’s really #1 for me. I’m not going to leave a description for this one, because if you don’t already know what this does, then you need to hit the internet hard immediately!

React - Developed by Facebook and Instagram, its features include one-way reactive data-flow to simplify view-model interactions and a virtual DOM for quick rendering changes. React developers encourage you to use this framework for even just the view part of your MVC solution; since React makes no assumptions about the rest of your technology stack, it’s easy to drop it into a small feature of your site. JSX, a syntax for turning JavaScript into more of a template (and back), is available to help with rendering. I would place React on the “heavily wired declarative HTML vs lightweight imperative JavaScript” spectrum right between Ember and Angular. Similar to Ember, it provides functionality for custom components, but goes way beyond it in optimizing DOM manipulation (which is its biggest advantage).

Modernizr - A JavaScript library which detects HTML5 and CSS3 features. After detection, it makes supported features known both through a JavaScript object and as classes appended to your <html> element. This makes it easy to write code conditionals based on features and to add specific CSS to regulate your site in a feature-responsive way. This is a widely popular library used on tons of sites.
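The detect-then-branch pattern is simple to sketch. Modernizr probes the real browser; in this hedged example the “environment” is a hypothetical stand-in object so the idea can run anywhere:

```javascript
// A sketch of the detect-then-branch pattern Modernizr enables. Modernizr
// itself probes the actual browser; here `env` is a hypothetical stand-in
// so the example is self-contained.
function detectFeatures(env, features) {
  var supported = {};
  features.forEach(function (name) {
    supported[name] = typeof env[name] !== 'undefined';
  });
  return supported;
}

// Hypothetical environment with canvas support but no localStorage:
var env = { canvas: function () {} };
var has = detectFeatures(env, ['canvas', 'localStorage']);
console.log(has.canvas);       // true
console.log(has.localStorage); // false

// ...then branch, the same way you would on the Modernizr object:
var strategy = has.canvas ? 'draw with canvas' : 'fall back to images';
```

Modernizr additionally appends class names like `canvas` or `no-canvas` to the `<html>` element, so your CSS can branch the same way your JavaScript does.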

Bower - A great package management solution which uses npm and Node.js under the hood. It exposes your package dependencies via an API to be consumed by your (and public) tech stacks. Bower is a widely used framework, which allows consumers to assemble packages and install components into their applications from around the web.

…also, since I mentioned this a couple times regarding currently popular MVC frameworks, here are the candidates laid out on the spectrum:

[lightweight imperative JavaScript] - Backbone - Ember - React - Angular - Web Components - [heavily wired declarative HTML]

Honorable mentions:
Video.js - easy to use for videos
jQuery UI - widgets built on top of jQuery
Less - programmatic CSS
CodeMirror - in-browser code editor
typeahead - type-ahead autocomplete suggestions

Special honorable mention:
RequireJS - I use this library in all of my projects because it makes all of your JS files/components easy to load and manage just like classes are in good back-end object-oriented languages. Just like that one commercial says, “I put that **** on everything!”



Recently I got a chance to attend a great presentation by Jonathon Colman of Facebook, called “Integrated Content Strategy”. He shared some great insights on what content really is, and what every product strategist needs to know:

First of all, content strategy is not all about copywriting (which is writing copy for the purpose of advertising or marketing). A copywriter’s copy is meant to persuade someone to buy a product, or influence their beliefs; however, the goal of great content strategy is creating meaningful and unambiguous experiences. But the question remains: How do we talk about what we do? The answer is: In order to successfully plan creation, publication, and governance of content, we must first understand the content’s identity.

Content is really the entire experience, not just words, fonts, code, design, or ads. Just because you have rich content doesn’t mean you provide the right experience. Content includes the entire team or service provided. It has a huge focus on feelings, and the strategy of content is really the strategy of relating to people. Content involves people, and people are political. For this reason, most CMS systems do not provide a great experience and are badly in need of content strategy.

Quite frankly, the core principle of content strategy is empathy. You have to understand people. Empathy is the antidote to politics, and it comes in many flavors. The voice and tone with which you serve content is very important. You need to be in tune with “feelings” from customers whenever possible. One way to master empathy in content strategy is to understand what people really value, and what they believe they are measured by. In essence, if you figure out what people want, you’ll be golden. Of course, this is not easy to do and requires extensive familiarity with people’s habits over time.

For content strategy, concept maps are a useful way to help organize your designs. This is because content strategists don’t just make “things”, they make systems which make “things”. If you’re doing it right, your customers should not be able to tell where content ends and where design begins. It’s also good to have consistent content templates. Having different views which essentially have the same functionality means tons of wasted time for your designers and developers. Consistency reduces your technical debt, which ultimately affects your budget. Strive to have standards for content that are governed and visible to everyone. This helps a lot when new content is added.

It’s often difficult to manage and audit your content inventory well. The hard part is figuring out which of the content you actually have is content that both you and your customers want. It’s a tiny sweet spot, and choosing the wrong content can be political, and get you in trouble. A good idea is to always provide visual user experience metadata somewhere in your views. You can think of metadata as “a love note to the future”. You should invest in it when you can, because with strong metadata, you can build great APIs, and that’s very valuable.


Here are my thoughts on an Angular presentation I saw today:

First of all, at a glance, Angular looks like it is a solution which sits between what some currently use (RequireJS, Backbone, Underscore) and the up-and-coming Web Components standards. My opinion is that current Web Components implementations are already cleaner and easier to use/build than Angular; so you can think of Angular as “Web Components Lite”, or “Beta”, and it might be worth just leap-frogging it, and getting straight into stuff like Polymer.

I think a huge difference between Backbone and Angular is how “heavy” they are: Backbone is very lightweight and its models, views, and router are completely abstracted away from the DOM. This means you can move around models and views without worrying about how they plug into the DOM; on the other hand, a “heavier” framework like Angular is completely tied to the DOM and is used in a declarative way. This means components are chunks of interwoven HTML and JS and are rather hard to separate. But this could also mean the chunks themselves can be easily re-used if implemented correctly. Again, I see Angular as something that Web Components will replace in the future anyway.

It did seem that using Angular removes the need for RequireJS (replaced with dependency injection), Backbone (DOM data bindings, inline controllers, and DOM-level scoping), and Underscore (template partials and HTML render manipulation methods like ‘ng-repeat’). These are all very close to Web Components features like Shadow DOM, templates, and data bindings, but the main difference is that web components work closer with native browser implementations, as opposed to Angular’s engine, which constantly has to run through the site and recompile.

…it might be useful to use Angular, but it takes longer to learn…I say stick with Backbone until Web Components completely come around, then perhaps try hybrids of both :)



If you haven’t heard about web components yet then take this opportunity to learn as much as you can about them, because they are ushering in a new age in web development. Led by the brightest minds at companies such as Google and Mozilla, web components are about to change the entire landscape of how you build your web sites. What we now call “web components” is a series of emerging W3C standards that allow developers to define custom HTML elements, and interact with them using the native DOM, as well as extend HTML and its functionality. They are based on specs for building UI components as custom HTML elements, which deal with high-level app concepts and low-level DOM manipulation. Essentially, they turn your DOM into your main development platform. Think of web components as interchangeable building blocks of websites, which can be pre-built for you. They let you render components via a single element mention, and all of the component’s internals such as HTML, CSS, and JavaScript are taken care of for you. This gives us a unified way to create new elements that encompass rich functionality and render as expected without the need for all of the extra libraries.

This is the new way to build HTML. Everything on the page can be a web component. You can make web components that do anything, even render 3D WebGL code from a single DOM element. Their real power comes from the fact that all of their complexity is hidden. They take attributes and use them as params internally to ‘configure’ their behavior. Since they make HTML markup declarative, they are very easy to reuse. You can take any DOM segments, even combinations of components, and make new components with them. They are completely defined with simple nested tags, which makes it easy to compose elements together, even ones built by different libraries, because the common language is the DOM. Overwhelming markup trees (referred to as ‘div soup’) can be replaced with new web component tags which also eliminate the need for tons of JavaScript. The new tags separate concerns cleanly, which helps make scalable applications. Currently, too many frameworks have their own ways of implementing new visual elements; web components, on the other hand, bring in standard component tags which only expose the relevant data (much like a select tag renders the entire select box for you without exposing the DOM details, or a video tag renders a video player without exposing the markup of the controls). Web components will help SEO by making more information available to crawlers via markup. Also a huge benefit of web components is that they extend web development from being just for programmers, to anyone who can use HTML.


Polyfills, also known as “shims that mimic a future API, providing fallback functionality to older browsers”, serve as a bridge to web components and have already been widely used for some time. Creating them, however, took a lot of work and they are still hard to use together. You can think of polyfills as a layer over the current native browser elements. They bring you fallbacks and compatibility for new components across all browsers; however, their performance can be lacking. Polymer, a huge Google project, is a framework which was developed to serve as a layer on top of the existing polyfills and as a platform for new web components. It uses the latest web technologies to let you create custom HTML elements. Under the hood, Polymer uses a set of polyfills to help you create web components on all major browsers. It is simple to use, and allows us to create reusable components that work as true DOM elements while helping to minimize our reliance on JavaScript to do complex DOM manipulations and to render UI results. New Polymer elements are easy to create using templates, and existing elements include: animation, accordions, grid layout, ajax capabilities, menus, and tabs (it has over 100 of them). In fact, the PlayStation 4 used Polymer custom elements to build its UI. Polymer can be installed in separate modules, or even separate components (via Bower).

“Introduction to Web Components” is a W3C spec which defines a set of standards for web components. It strives to go beyond what current CSS and JavaScript can provide for the DOM:

Templates - define chunks of markup which are inert but can be activated and used later. They use <content> tags wrapped in <template> tags to specify markup fragments.

Custom Elements - let you create new elements with new tag names and script interfaces. They use <element> tags and allow you to extend other HTML elements via prototypes (similar to JavaScript objects). Nicknamed ‘custom tags’, they can nest scripts and come with useful lifecycle callbacks.

Imports - define how to load web components from external files using the link tag.

Shadow DOM - encapsulates DOM trees for UI elements, and is perhaps the most intriguing standard in the spec. You can connect shadow DOM to any element, and it will act like normal DOM, but it is not attached to its host element the way normal DOM sub-trees are: during rendering, the shadow DOM is rendered instead of the real child nodes of the element it is attached to. Shadow DOM trees use <content> tags to match which specific children of the original DOM get rendered, and insertion points to specify where. Possibly the best feature of shadow DOM trees is that they are separated from the original DOM by a boundary which keeps CSS and JavaScript from bleeding through. This effectively creates a scope for the shadow DOM, giving its authors a lot of control over how content inside interacts with the surrounding DOM. It allows us to encapsulate everything much like iFrames do, but in a far more controlled and cleaner manner.

Data bindings between web components help them talk to each other (using mustache syntax), and allow non-visual components to serve as data processors that talk to visual components and share data with them.
If you want to get started creating web components using these specs, there are boilerplate projects on GitHub to help you with this.
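For a taste of what these specs look like in practice, here is a small markup sketch combining a template, a shadow root, and a <content> insertion point. The APIs in this area were still in flux at the time of writing, so treat the exact method names as illustrative:

```html
<!-- A sketch of the template + shadow DOM idea described above. -->
<template id="user-card">
  <style>
    /* Scoped by the shadow boundary — won't bleed into the page */
    h2 { color: steelblue; }
  </style>
  <h2>User</h2>
  <content></content> <!-- insertion point for the host's children -->
</template>

<div id="host">Jane Doe</div>

<script>
  var template = document.querySelector('#user-card');
  var host = document.querySelector('#host');
  // The shadow tree is rendered in place of the host's real children:
  var shadow = host.createShadowRoot();
  shadow.appendChild(document.importNode(template.content, true));
</script>
```

The page still contains only `<div id="host">Jane Doe</div>` as far as outside CSS and JavaScript are concerned; the heading and styles live behind the shadow boundary.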

Besides Polymer, other popular web component platforms include X-Tag and Bosonic. X-Tag is a small library made by Mozilla that brings web components’ custom element capabilities to all modern browsers. It lets you easily create elements to encapsulate common behavior or use existing custom elements to quickly get the behavior you’re looking for. X-Tag is actually built on top of Polymer polyfills, and includes some cool built-in components like Panel (mimics iFrame), Modal, and Map (Leaflet). Bosonic follows the web components spec closely and includes lots of components: collapsible, sortable, datepicker, dropdown, datalist, tooltip, accordion, draggable, toggle-button, tabs, autocomplete, resizer, dialog, selectable, and flash message.


There are tons of cool examples of web components in action today. One such example is the combination of a Reddit element (which grabs data) and an AJAX component, which together easily read from a Reddit site using AJAX and post data back into your DOM, all without writing any JavaScript! You can take a Google Maps element which someone has made, add a marker element which someone else has made, and the components know how to talk to each other and render together harmoniously. Some have made local storage elements, which store data in tags. I saw an amazing designer interface which lets you piece custom elements together visually in a sandbox, with configurable data bindings, all used without even looking at the markup. component.kitchen is a gallery website with tons of component examples. customelements.io is a website with a huge list of custom web components, even sorted by popularity. Finally, webcomponents.org is a simple, neutral site/community devoted to encouraging good best practices for web components.

Web Components are not fully supported on Safari and Internet Explorer yet; however, with Polymer, web components are supported in the two latest versions of all modern browsers except for IE8, IE9, and Android. Chrome Canary is currently the best browser for use with web components.


Last week I got to see a presentation by Ilya Grigorik based on his book, High Performance Browser Networking. Ilya Grigorik is a web performance engineer and developer advocate on the Make The Web Fast team at Google, where he spends his days and nights on making the web fast and driving adoption of performance best practices. Although the networking side of web technology is extremely fascinating to me, my knowledge of it has been limited to Network Essentials classes from college coupled with sporadic wikipedia lookups. Ilya changed a lot of this for me with this presentation, and I am truly amazed at the technological achievements in networking that are coming down the global web pipeline this year.

There have been a lot of changes in recent years that have made sending and receiving data faster, especially on the client side; however, 70% of the request lifecycle is still spent in the main bottleneck: the network. To even get to the application, data has to pass through network protocols such as HTTP, TLS, TCP, and IP, and be transmitted through mediums such as cable, radio, or wi-fi. For a mobile phone, the average data round trip time (phone to radio network to core network to public internet to server and back) is about 100 milliseconds. In the US it’s almost twice as fast. The performance of such a request is based on both bandwidth and latency, but latency is the real issue. Even if it could travel at the speed of light, an HTTP request is still slowed down at the “connection points” by DNS lookups, socket connects, and content downloads.
Meanwhile, user patience is on the low end: any delay over a tenth of a second feels “sluggish”, and a whole second causes a mental context switch. ISP carriers love to advertise bandwidth as “speed”, but the real measure of network speed is latency. Internet packets can travel from New York City to San Francisco in 21 milliseconds. To give you an idea of how fast that already is, the same trip would take 14 milliseconds at the speed of light (light takes 133.7 milliseconds to go completely around the world). So internet packets already travel at about two-thirds of the speed of light thanks to optical fiber cable, and advances to make it even faster are in the works. Unfortunately, the latency from the ISP to your router adds another 18 milliseconds.

Furthermore, every TCP connection begins with a handshake, and no data can be sent until that handshake is complete. TCP slow-start then limits how many segments of data the server can send back: it starts with a window of four segments, which doubles on every subsequent round trip. This caps how much data each round trip can carry, a way of controlling traffic so that the “pipes aren’t stuffed”. While this technique is essential to managing internet traffic, it is a huge problem for latency because it is enforced on every new connection. Fortunately, HTTP keep-alive preserves the last congestion window even after a long pause, and the number of starting segments has been raised from 4 to 10. Still, the average web page fetches around 100 resources from 11 different servers. WebSockets are useful here because, after opening a connection, every message reuses the same underlying TCP route and its already-grown window.
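A toy model shows why the initial window bump from 4 to 10 segments matters. This sketch only models the doubling window; real TCP also reacts to packet loss and receiver limits, which are ignored here:

```javascript
// Toy slow-start model: count round trips needed to deliver a payload,
// with the congestion window doubling each trip (loss ignored).
function roundTripsToSend(totalSegments, initcwnd) {
  let cwnd = initcwnd;  // congestion window, in segments
  let sent = 0;
  let trips = 0;
  while (sent < totalSegments) {
    sent += cwnd;  // send a full window this round trip
    cwnd *= 2;     // slow-start: the window doubles each trip
    trips += 1;
  }
  return trips;
}

console.log(roundTripsToSend(60, 4));   // old initcwnd of 4: 4 trips (4+8+16+32)
console.log(roundTripsToSend(60, 10));  // new initcwnd of 10: 3 trips (10+20+40)
```

One round trip saved on every fresh connection adds up fast when a page opens connections to 11 different servers.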
Finally, the biggest thing happening in networking this year is HTTP 2.0. In fact, SPDY, the basis for the first draft of HTTP 2.0, is already implemented in all modern browsers. Current HTTP 1.x workarounds such as domain sharding, file concatenation, image spriting, and resource inlining will be improved upon or rendered unnecessary once HTTP 2.0 replaces SPDY. With HTTP 2.0, data is multiplexed, prioritized, and streamed over a single TCP connection, allowing servers to send the most important data first; the order of transmission only matters within each individual stream. This lets clients open as many streams as they need over the same connection. HTTP 2.0 uses a binary framing layer, flow control, server push, and header compression to reach its goal of cutting down internet wait time, and should be implemented in most modern browsers by the end of the year!
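Here is a deliberately simplified picture of that multiplexing idea: frames from several streams interleave over one connection, each tagged with a stream id, and the receiver reassembles per-stream order. The frame shape below is my own illustration, not the real binary framing format:

```javascript
// Toy HTTP 2.0 multiplexing: frames from different streams share one
// connection; order only matters within each individual stream.
const frames = [
  { stream: 1, data: '<html>' },
  { stream: 3, data: 'body{' },   // the CSS stream doesn't wait for the HTML stream
  { stream: 1, data: '</html>' },
  { stream: 3, data: '}' }
];

// Receiver side: group frames by stream id to rebuild each resource.
const streams = {};
for (const frame of frames) {
  streams[frame.stream] = (streams[frame.stream] || '') + frame.data;
}

console.log(streams);  // both resources arrive intact despite interleaving
```

Under HTTP 1.x, each of those resources would have queued up behind the other or needed its own TCP connection with its own slow-start.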



With the likes of Facebook and Twitter using cards as the main way to display web content, the time has come again to evolve how users experience the web. Today’s interactive web cards not only display information, but also lure users into playing with them. Users get caught up in a moment of exploration, focusing on an already familiar card template and playing with the accessories of its content: adding comments which appear below an image, sharing or voting on the content with quick finger taps or clicks, watching a video or an interactive slideshow, even playing a game right on the card. Familiarity with the card container guides users through this exploratory experience and keeps them engaged, but past that, what you put inside the card template is up to you. If you think about it, there is really no limit to what you can place in a card for users to play with. You just need the proper content management tools to support building such cards in a way that gives you the most room to be creative while keeping a familiar card experience for the user.


ThisMoment is currently developing an environment that will let you not only take all of your content and display it in card templates, but also use custom applications inside the cards themselves. Developers will be able to build their own custom card applications or download them from a store, and then drop them right into their content environment with ease. It will be up to the developer whether to use pre-built card templates or display content in an entirely different interactive way. This puts the power of card interaction right into your users’ hands, custom tailored to your brand’s desired experience, while adhering to our powerful interactive card environment. As a result, these interactive card applications should help move purchase intent from “interested” to “buying”, especially for content associated with a product.

Content Cloud, which may already be available by the time you read this, is a powerful stepping stone toward such an environment. It not only gives you industry-grade card content management features, but also paves the way for more advanced card-playing experiences in the future. Its innovative approach to card content, coupled with a sturdy platform which took years to perfect, makes it a great asset for any brand. The card content experience is no longer exclusive to the Facebooks and Twitters of the world. Content Cloud makes it easy to gather all the content relevant to your brand and bring it to your users in the form of card content playlists. But this is only the beginning.


In the future, cards will be more engaging, more interactive, and more meaningful to the end user. Card interactions could include new ways for users to ‘handle’ them, such as flipping cards over to reveal accessory content on their backs. Card templates could come with custom card widgets, customizable by site admins on the fly. Live-updating components on the cards, such as streaming conversations, tickers, and notifications, could really bring cards to life. Even collecting cards into users’ own ‘card decks’ could become a reality. Playing with cards will truly become an awesome user experience.



What language exists on every single smart device these days? If you said JavaScript, you are correct. Throw in HTML5 and CSS3 and you’ve got yourself an application that can do pretty much whatever you want…as long as there’s a browser environment to make it happen. When it comes to app development, we live in a world where the most successful applications are the ones best supported across all environments and built with the most community backing. Therefore, it only makes sense that the operating system of the future is a web-based platform. Last week, I saw an inspiring presentation with some co-workers at Yelp HQ, put on by Nick Desaulniers (@LostOracle). He opened the event by asking everyone two great questions:

Is the browser the first thing you launch on your desktop?
Can your browser do everything your desktop can?

To the first question I answered yes immediately; to the second…well, I’m hoping to answer yes as soon as possible. Under the covers, the presentation had a lot to do with Firefox OS, a Linux kernel-based open-source operating system for mobile devices; however, Nick didn’t make the presentation exclusively about Firefox OS (developed by the non-profit Mozilla), and he didn’t have to. The idea Firefox OS is based on is novel enough to present on its own: a “complete” community-based alternative system for mobile devices, built on open standards and approaches such as HTML5 applications, JavaScript, a robust privilege model, open web APIs that talk directly to the phone hardware, and an application marketplace. This definitely got my wheels spinning. If PhoneGap was so successful in re-creating existing apps with web technologies and porting them to every platform, then why couldn’t there be an entire operating system built this way? Actually, if everyone switched to such an operating system, then everyone could code together in harmony. I’m not talking about “my code is better than yours” here. I’m talking about “let’s all speak the new web language; after all, it’s what everyone already knows”. Firefox OS’s project proposal was literally to “pursue the goal of building a complete, standalone operating system for the open web in order to find the gaps that keep web developers from being able to build apps that are – in every way – the equals of native apps…”


Of course, a platform such as this requires a few things: new web APIs to expose device and OS capabilities such as the telephone and camera, a privilege model to safely expose these to web pages, applications to prove out these capabilities, and low-level code to boot on an Android-compatible device. But the standards-based open web has the potential to be a competitive alternative to the existing single-vendor application development stacks offered by the dominant mobile operating systems. I mean, everything can run in JavaScript! Furthermore, existing vendor-specific apps can be repackaged in a web technology stack (again, think PhoneGap) and are then ready to run on such an operating system. People have gotten so used to using web technologies to build internet applications that they often fail to notice these technologies are powerful enough, in theory, to do anything their desktop can. You can run JavaScript without an internet connection; well folks, the same goes for apps.
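As a taste of what “web APIs for phone hardware” meant in practice, here is a sketch built around Firefox OS’s vendor-prefixed Web Telephony API (navigator.mozTelephony). The API was privileged and never standardized, so treat the call shape as illustrative:

```javascript
// Dialing the phone from JavaScript, Firefox OS style. The
// mozTelephony API was privileged and vendor-prefixed; this is a
// sketch of the idea, not a portable interface.
function dial(number) {
  if (typeof navigator === 'undefined' || !navigator.mozTelephony) {
    return null;  // not running on a device that exposes telephony
  }
  return navigator.mozTelephony.dial(number);  // returns a call object
}
```

The interesting part isn’t the three lines of code, it’s that the phone’s radio becomes just another object reachable from an ordinary web page, gated by the OS’s privilege model.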

The problem with current mobile operating systems is that they are at odds with each other. A web-based platform which uses only open standards, with no proprietary software involved, is much more accessible to every developer. If the software stack is entirely HTML5, then there is already a large pool of established developers. If we can bridge the gap between native frameworks and web applications using W3C standards, then developers can build applications which run in any standards-compliant browser without rewriting them for each platform. But are current web technologies up to the task? JavaScript has gotten 100 times faster since 2006. Many existing applications were written in widely used, high-performance languages such as C, but there are now efficient ways to compile C to JavaScript; in fact, JavaScript has become one of the most common compilation targets for porting applications. JavaScript vendors are optimizing their engines, a necessary step for the convergence of web technologies and native operating systems. Of course, we can’t forget the malicious things these cross-operating-system apps could do, which is why browser operating systems need powerful permission models. The operating system says, “Sure, we’ll let you in if you speak our web language, but that doesn’t mean you can do whatever you want, buddy!” This is why browsers like Chrome have implemented “process isolation” patterns. HTML5 apps use application-based permissions, which have worked in practice, but their standardization is slow because of the numerous scenarios they entail.
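Those application-based permissions were declared up front in a Firefox OS manifest.webapp. The sketch below shows the shape of such a manifest as a JS object; the field names follow the Open Web Apps manifest format, but the app name and permission descriptions are invented for illustration:

```javascript
// The shape of app-level permission grants in a Firefox OS
// manifest.webapp (expressed here as a JS object). App name and
// descriptions are made up for illustration.
const manifest = {
  name: 'Camera Notes',          // hypothetical example app
  type: 'privileged',            // privileged apps undergo stricter review
  permissions: {
    camera: { description: 'Attach photos to notes' },
    geolocation: { description: 'Tag notes with a location' }
  }
};

// The OS, not the app, decides whether each declared permission
// is actually granted at install or first use.
console.log(Object.keys(manifest.permissions));
```

Declaring capabilities statically like this is what lets the OS review, prompt for, and revoke access per app rather than trusting arbitrary page code.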

In all of this hype, there are also doubters. People complain that web technologies just can’t deliver the same graphically seamless and error-proof user experience that native apps can. I have seen this firsthand, but I am not foolish enough to think it will always be the case. With all of the innovative, collaborative, and even competitive drive out there to build the best operating system experience possible, I can’t wait to see the browser operating systems of the future. While Firefox OS is just a first run at this, it is truly a driving force in the web-versus-native war, which, as we all know, web is going to win, hands down :)



When I think of content playlists, the first thing that comes to mind is iTunes. It became the most popular playlist organization tool after iPods and iPhones took off. And why not? It let you easily rip all of your favorite songs right off of any CD in your collection (you guys remember when you had CDs, right?) and put them into one easy-to-manage library. Once inside this library, your media can be organized and analyzed in numerous ways, helping you figure out what to do with it. To me, the most interesting part of all this was creating the “smart” and “dumb” playlists, which you would use to load the media onto devices, where it would be listened to by its target audience: you.

Having multiple playlists at your disposal allows you to quickly make important decisions as they are needed. Sometimes it’s an all-out “best-of” day; sometimes we want to listen to the new stuff which just became available. Configuring “smart” playlists is easy in iTunes because of the filtering system. Filters narrow the results of your playlists, acting as metadata rules which can be stacked on top of each other or easily removed. It’s a system that can produce sophisticated playlists with an approach that is easy to manage. Of course, you always have the option of going with the “dumb” playlist, in which you pick every piece of content yourself; however, this can get tedious when there are massive amounts of content to choose from.
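The stacked-rules idea is simple enough to sketch in a few lines: each filter is a predicate over track metadata, and a “smart” playlist is just the library run through every rule. The track fields and sample data here are invented for illustration:

```javascript
// iTunes-style "smart" playlist: stackable metadata rules applied to
// a library. Track fields and data are made up for illustration.
const library = [
  { title: 'Song A', genre: 'Rock', rating: 5, year: 2013 },
  { title: 'Song B', genre: 'Rock', rating: 3, year: 2009 },
  { title: 'Song C', genre: 'Jazz', rating: 5, year: 2012 }
];

// Each rule is an independent predicate; add or remove them freely.
const rules = [
  track => track.genre === 'Rock',
  track => track.rating >= 4
];

// The smart playlist keeps only tracks matching every stacked rule.
const smartPlaylist = library.filter(t => rules.every(rule => rule(t)));
console.log(smartPlaylist.map(t => t.title));  // → [ 'Song A' ]
```

Because the rules are independent, removing one (say, the rating filter) instantly widens the playlist, which is exactly what makes the filter-stacking approach easy to manage.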

The smart playlists in iTunes are also “live updating”, which is a very attractive feature for those looking for a more streamlined experience. By having the application search the ever-changing library and insert new content right into your playlists, your playlists go from mere collections to fully automated media stations. The idea here is that playlist selection moves from tedious and predetermined (think YouTube) to automated and responsive (think Pandora). The best part about “live updating” playlists is that you can tailor which content to keep or discard by experiencing and reacting to it (star ratings, up/down thumbs, votes, etc.).

Even when these smart playlists or stations start to repeat somewhat, the order of the content is still shuffled and randomized to keep things interesting. A big difference between iTunes smart lists and Pandora stations is that smart lists grab content based on a series of metadata rules, whereas stations artificially create these filters for you from any one piece of content of your choosing, completely abstracting this step away from the playlist administrator. YouTube takes a different approach entirely: it has no “smart” playlists or stations, but it does recommend related content for you to experience (and possibly add) wherever applicable, which is pretty much everywhere you find content.

P.S. Check out the “XML Shareable Playlist Format” (XSPF), which has been around since 2004:

<?xml version="1.0" encoding="UTF-8"?>
<playlist version="1" xmlns="http://xspf.org/ns/0/">
      <trackList>
            <track><title>Windows Path</title></track>
            <track><title>Linux Path</title></track>
            <track><title>Relative Path</title></track>
            <track><title>External Example</title></track>
      </trackList>
</playlist>