Goodbye, Sprockets! A Grunt-based Rails Asset Pipeline

How to replace the Rails asset pipeline with a Grunt-based system: Part 1 of our build and deploy process.

This is the first in a two-part series. See Part 2 of our build and deploy process.

Like any good startup, we try to leverage off-the-shelf tools to save time in our development process. Sounds simple enough, but the devil is in the details, and sometimes a custom solution is worth the effort. In this post, I’ll describe how and why we replaced the Rails asset pipeline with a Grunt-based system.

In the Beginning…

Early on, we embraced AngularJS as the foundation of our core application. We started prototyping using the Yeoman project and never looked back. If you’ve never used this project before, I highly recommend checking it out. It will save you time and tedium in setting up a development ecosystem. We fell in love with the Bower and Grunt utilities as a way to manage project dependencies and build pipelines, and we found the array of active development on the various supporting toolsets impressive. We were knee deep in NodeJS land at this point.

After we stubbed out a good portion of the UI on mock data, we had to start looking towards building out an API that could take us into further iteration. Ruby on Rails was proven and familiar, and we knew how to carve out a reliable backend in no time flat. Additionally, we wanted to take advantage of some proven RubyGems that handle common tasks for which the NodeJS web ecosystem hadn’t yet fully established itself. Some of these gems take on view responsibilities, and as such rely on Sprockets for asset compilation.

At this point, we had an AngularJS project, built and managed with Grunt, contained within a Rails project, built and managed with Rake and Sprockets.

Trouble in Paradise

We quickly found ourselves hitting a wall trying to manage these two paradigms, as have several others.

Our hybrid Grunt + Sprockets asset pipeline included multiple build processes and methods of shuffling assets. The more we tried to get these two jealous lovers to play nice, the more they fought. The final straw was a combination of minification-induced runtime errors and the lack of sourcemap compilation support in Sprockets (while somewhat supported in an ongoing feature branch, sourcemaps hadn’t made it into master and required dependency changes we weren’t ready to make quite yet).

At this point it became apparent that we were wasting precious cycles dealing with things outside our core competency, and that we needed to unify these pipelines once and for all.

Unification

Our solution: say goodbye to Sprockets! We have completely disabled the traditional Rails asset pipeline, and now rely on GruntJS for all things assets-related. The deciding factors for us were the community activity and the flexibility the project provided. Here’s a Gist of our (slightly sanitized) Gruntfile.js powering the whole pipeline.

How we currently work:

  • We don’t use the Rails asset helpers…at all. We use vanilla HTML for our views as much as possible. Attempts to use the Rails asset helpers ended up being overly complex and ultimately felt like trying to work a square peg into a round hole.
  • We reference the compiled scripts and styles (common.js, app.js, main.css, etc) directly in our Rails layouts.
  • Grunt build and watch tasks handle the pipeline actively and passively. In development, we use the wrapper task grunt server to launch Rails along with our watches. Source and styles are compiled and published directly to Rails as they are saved. Likewise, unit tests run continually, with output to the console and OS X reporters.
  • LiveReload refreshes the browser or injects CSS whenever published assets are updated or otherwise modified.
  • We no longer require our Rails servers to perform any sort of asset compilation at launch, as assets are now built by CI with the command grunt build prior to deployment. Nothing structural in our build and deployment process has changed (in our case, using Bamboo to deploy to Elastic Beanstalk).

With the above, we are now constantly testing using the assets that actually make it into a production environment, with sourcemap support to handle browser debugging sessions. Upon deployment, Rails instances do not need to pre-process static assets, reducing warm-up time.

Ultimately, the modular nature of the Grunt task system ensures we have a huge array of tools to work with, and as such, we’ve been able to incorporate all the nice little things that Sprockets does for us (including cache-busting and gzip compression) and the things it doesn’t (sourcemaps).

DIY

Feel free to steal our Gruntfile.js if you’re looking to adopt this system. We’ve also cobbled together a list of Grunt tasks that we’ve found helpful:

  • grunt-contrib-watch – the glue that binds automated asset compilation together.
  • grunt-angular-templates – allows us to embed our AngularJS directive templates into our JavaScript amalgamation. Also useful for testing.
  • grunt-contrib-uglify – handles all JS concatenation, minification, and obfuscation. Despite adhering to the AngularJS minification rules, we’ve found issues with the mangle parameter and must disable that flag when handling Angular code. UglifyJS2 also provides our sourcemaps.
  • grunt-contrib-compass – we only author SCSS and rely on Compass to handle everything concerning our styles, including compilation and minification as well as spritesheet and sourcemap generation.
  • grunt-autoprefixer – …except we don’t bother writing browser-specific prefixes. Instead we use autoprefixer to automatically insert them. The recent version supports sourcemap rewrites.
  • grunt-cache-bust – renames assets to CDN friendly cache-busted filenames during distribution.
  • grunt-contrib-jshint + grunt-jsbeautifier – keeps our code clean and pretty.
  • grunt-karma – is constantly making sure we write code that works as intended.
  • grunt-todos – reminds us not to litter.  =]
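The mangle issue noted above for grunt-contrib-uglify comes down to how AngularJS resolves dependencies: given a bare function, Angular infers injectables from parameter names, which a mangling minifier renames. The inline array annotation sidesteps this, since string literals are never renamed (the registration line below is illustrative):

```javascript
// Implicit DI breaks under mangling:
//   app.controller('MainCtrl', function ($scope) { ... });
// because '$scope' may become 'a' after minification.

// Explicit annotation survives: the dependency names are string
// literals, and only the final element (the function) gets mangled.
var annotated = ['$scope', '$http', function ($scope, $http) {
  $scope.greeting = 'hello';
}];

// Registration would then look like (hypothetical module name):
//   app.controller('MainCtrl', annotated);
```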

Learn more about our build and deploy process in Part 2 of this series.

We hope this guide helps others trying to marry these two technologies. Please feel free to contribute with suggestions for future improvements via GitHub or Twitter!


We just launched our first product! Learn more about Smartly at https://smart.ly.

Questions, comments? Follow us on Facebook or Twitter.

Headless integration testing using capybara-webkit

We use Cucumber for integration testing our Rails servers, and by default all Cucumber scenarios tagged with “@javascript” pop up a browser. We needed to get this running headless so we could run these tests on our build machine. We use the Atlassian suite, and Bamboo for CI, running on EC2.

This post is for developers or sysadmins setting up Rails integration testing on a CI system like Travis, Hudson, or Bamboo.


The de facto way of running headless tests in Rails is to use capybara-webkit, which is easy to install and run locally following the guides here.

Capybara-webkit relies on Qt, which is straightforward (though slow) to install on OS X, which we use for development. Our build box however is Amazon Linux, which is supposedly a distant cousin of CentOS. We’re using Amazon Linux because Bamboo OnDemand provides a set of stock Amazon Linux AMIs for builds that we have extended and customized.

We started out following the CentOS 6.3 installation guide from the capybara-webkit wiki above but quickly encountered problems because Amazon Linux doesn’t ship with a lot of libraries you might expect from Red Hat or CentOS, like gcc and X11.

Here are the steps we followed to get Qt installed and our headless Cucumber tests running on our Bamboo build machine. This installation process was tested on EC2 AMI ami-51792c38 (i686).


# First install dependencies listed on http://qt-project.org/doc/qt-4.8/requirements-x11.html that do not ship with Amazon Linux AMIs.
# If you don't do this, ./configure below will fail with errors like "Basic XLib functionality test failed!"

yum install -y gcc-c++
yum install -y libX11-devel
yum install -y fontconfig-devel
yum install -y libXcursor-devel
yum install -y libXext-devel
yum install -y libXfixes
yum install -y libXft-devel
yum install -y libXi-devel
yum install -y libXrandr-devel
yum install -y libXrender-devel
 
# download, configure, and install qt from source
wget http://download.qt-project.org/official_releases/qt/4.8/4.8.5/qt-everywhere-opensource-src-4.8.5.tar.gz
tar xzvf qt-everywhere-opensource-src-4.8.5.tar.gz
cd qt-everywhere-opensource-src-4.8.5
./configure --platform=linux-g++-32
 
# caution: this will take a long time, 5 hrs on an m1.small!
gmake
gmake install
 
# add qmake location to path
export PATH=$PATH:/usr/local/Trolltech/Qt-4.8.5/bin/
 
# now finally gem install will work!
gem install capybara-webkit

Curious about Pedago online education? Enter your email address to be added to our beta list.

Questions, comments? You should follow Pedago on Twitter.

Pedago releases 3 AngularJS projects to the open source community

In the past week, Pedago has released 3 open source projects on our github page.


iguana

Iguana is an Object-Document Mapper for use in AngularJS applications.  This means that it gives you a way to instantiate instances of a class every time you pull down data over an API.  It’s similar to Ruby tools like ActiveRecord or MongoMapper.
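For a sense of what object-document mapping means in practice, here is a generic toy sketch (this is purely illustrative and not iguana’s actual API):

```javascript
// Toy object-document mapper: turn raw API documents into instances
// of a class, so plain data gains behavior.
function instantiate(Klass, docs) {
  return docs.map(function (doc) {
    var obj = new Klass();
    Object.keys(doc).forEach(function (key) { obj[key] = doc[key]; });
    return obj;
  });
}

// A hypothetical model class with one instance method.
function Lesson() {}
Lesson.prototype.describe = function () {
  return this.title + ' (' + this.minutes + ' min)';
};

var lessons = instantiate(Lesson, [{ title: 'Intro', minutes: 5 }]);
// lessons[0] is now a Lesson instance wrapping the raw document
```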

super-model

Iguana depends on super-model, which should someday include much of the functionality that ActiveModel provides for Ruby users.  For now, however, it only provides callbacks.
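A callback system of this kind can be sketched generically like so (illustrative only; not super-model’s actual API):

```javascript
// Minimal before/after-style callback registry, in the spirit of
// ActiveModel lifecycle callbacks.
function CallbackRegistry() { this.handlers = {}; }

CallbackRegistry.prototype.on = function (event, fn) {
  (this.handlers[event] = this.handlers[event] || []).push(fn);
};

CallbackRegistry.prototype.run = function (event, target) {
  (this.handlers[event] || []).forEach(function (fn) { fn(target); });
};

var callbacks = new CallbackRegistry();
callbacks.on('before_save', function (record) { record.validated = true; });

var record = {};
callbacks.run('before_save', record);
// record.validated is now true
```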

a-class-above

Both iguana and super-model depend on a-class-above, which provides basic object-oriented programming (OOP) functionality. A-class-above is based on Prototype’s class implementation, and also provides inheritable class methods and some convenient helpers for dealing with enumerables that are shared among classes in an inheritance hierarchy.
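The flavor of class helper involved can be sketched generically; the following is a rough illustration of classical inheritance with inheritable class methods, not a-class-above’s actual API:

```javascript
// Build a subclass whose instances inherit from Parent and which also
// inherits Parent's class (static) methods, Prototype-style.
function extend(Parent, instanceMethods) {
  function Child() { Parent.apply(this, arguments); }
  Child.prototype = Object.create(Parent.prototype);
  Child.prototype.constructor = Child;

  // Copy class methods down so subclasses respond to them too.
  Object.keys(Parent).forEach(function (k) { Child[k] = Parent[k]; });

  Object.keys(instanceMethods || {}).forEach(function (k) {
    Child.prototype[k] = instanceMethods[k];
  });
  return Child;
}

function Animal() {}
Animal.kingdom = function () { return 'Animalia'; };

var Dog = extend(Animal, { speak: function () { return 'woof'; } });
// Dog.kingdom() still works: the class method was inherited
```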

This is our first foray into the management of open-source projects, so we’ll be learning as we go along.  We’re trying hard to make these useful to the community, so we have packaged them up as bower components and spent time writing what we hope is useful documentation.  We used groc for the documentation and focused on documenting our specs in order to provide lots of useful examples, rather than documenting each method in the API.  We hope that this will be more helpful than more traditional API documentation would have been, and would love to hear comments on how it’s working for folks.

We hope that other AngularJS users will find iguana, super-model, and a-class-above to be useful and decide to contribute.

Enjoy!



Wanted: Digital Marketer

Pedago is looking for a scrappy, metrics-driven marketer capable of working across multiple digital channels to drive growth for an early-stage consumer subscription tech startup. If you’re excited by the prospect of working in an agile environment with an experienced product team, read on!

The best candidates will have exposure to and/or direct experience in email marketing, SEM/SEO, and social media. You should be comfortable partnering with leadership to set a marketing budget and define channel strategies. Day to day, you will define experiments, manage PPC, email, and social media campaigns, and track progress toward lead/revenue goals.

This is a full-time salaried position and will be based out of our Arlington, VA office.

Pedago has a plan to deliver engaging, interactive educational content across a multitude of topics. We believe lifelong learning for adults is an underserved market, and we’re excited to begin testing our unique approach in the coming months.

Interested? We’d love to talk to you. Email us at jobs@pedago.com.

 

Learning Game Trees and Forgetting Wrong Paths

This is the second of two blog posts delineating the pedagogical approach of Herb Simon, the man credited with inventing the field of artificial intelligence, for which he won a Turing Award in 1975. (Read the first post here.) Simon was a polyglot social scientist, computer scientist, and economics professor at Carnegie Mellon University. He later won the Nobel Prize in Economics in 1978 for his work in organizational decision-making.

Game Tree
Tic Tac Toe Game Tree, Gdr from Wikimedia Commons

Dr Simon would often tell his students that he liked to think about human learning as a game tree: when you start out learning about a new topic, you begin at the root of the tree with what you already know, and follow connections to related topics, discovering new “nodes” in the tree. You employ a variety of search strategies to follow connections both broadly and deeply through related topics, loading as much of the explorable tree into memory as possible. As you discover and master each “node” on the tree, you learn which branches of the tree are fruitful and which are fruitless.

During and after exploration though, the entire game tree remains in your working memory, slowing you down. When you take breaks, not only are you relaxing, but you are also forgetting wrong paths – pruning those fruitless branches from your working memory. When you next return to the task at hand, you resume exploring connections and mastering concepts not at the very top of the tree, but in the most fruitful subtrees where you left off, making better use of your working memory.

At Pedago, we believe in learning by doing, and we want to break complex topics and concepts down into what Seymour Papert in the book Mindstorms calls “mind-sized bites.” One of the benefits of breaking complicated topics into “bites” is that it is easier to build learning content that learners can work through when they only have a few minutes free, on whatever device they have on hand.

As we build our database of short concepts and lessons, we find ourselves also building a rich tree structure of topic relation metadata that in structure is not unlike Simon’s game tree of learning. A nice side-effect of a learning solution with rich, encapsulated, short lessons is that you don’t have to commit to a thirty minute video – you can learn in bits and pieces throughout your day. And by doing this, you are unintentionally building and then pruning your learning game tree in an efficient way, forgetting wrong paths and making the best use of your working memory each time you return to your lessons.

 

Today’s Inspiration from Jiddu Krishnamurti

children with teacher
Sergey Prokudin-Gorsky [Public domain], via Wikimedia Commons

“There is no end to education. It is not that you read a book, pass an examination and finish with education. The whole of life, from the moment you are born till the moment you die is a process of learning.”

~Jiddu Krishnamurti

Krishnamurti on Education, Conversation 43

Herb Simon on Learning and Satisficing

This is the first of two posts delineating the pedagogical approach of Herb Simon, credited with inventing the field of AI, for which he won a Turing Award in 1975.

This is the first of two blog posts delineating the pedagogical approach of Herb Simon, the man credited with inventing the field of artificial intelligence, for which he won a Turing Award in 1975. Simon was a polyglot social scientist, computer scientist, and economics professor at Carnegie Mellon University. He later won the Nobel Prize in Economics in 1978 for his work in organizational decision-making.

Herbert Simon in front of blackboard
Herbert Simon, Pittsburgh Post-Gazette Archives

“Learning results from what the student does and thinks and only from what the student does and thinks. The teacher can advance learning only by influencing what the student does to learn.” –Herb Simon

Among his many accomplishments, Herb Simon was a pioneer in the field of adaptive production systems. He also identified the decision-making strategy “satisficing,” which describes the goal of finding a solution that is “good enough” and which meets an acceptability threshold, as opposed to “optimizing,” which aims to find an ideal solution.

Simon believed that human beings lack the cognitive resources to optimize, and are usually operating under imperfect information or inaccurate probabilities of outcomes. In both computer algorithm optimization and human decision-making, satisficing can save significant resources, as the cost of collecting the additional information needed to make the optimal decision can often exceed the total benefit of the current decision.

We live in a world where overwhelming amounts of information are at our very fingertips. Every month, new educational software offerings hit the market. You can find tutorials to fix anything in your house, learn a new language for free, find lessons that teach you to dance, and watch video lectures from top universities in the topics of your choice.

I like to think of myself as a polyglot learner: I would love nothing better than to just take a year, or two, or ten, and learn as much as I can about everything. But unfortunately, I have limited time. How do I know which tutorials, lessons, and classes are worth the commitment of my time? How can I find a satisficing solution to the problem of becoming a more well-rounded learner and human being?

In Simon’s words, “information is not the scarce resource; what is scarce is the time for us humans to attend to it.” At Pedago we’ve been inspired by thinkers such as Simon to build a learning solution that makes the most of the scarce resource of your time, by employing curated streams of bite-sized lessons; rich, explorable connections between topics; interactive learn-by-doing experiences; and just the right amount of gamification. We want to enable you to craft your own learning experience, so that you can, as Simon would say, positively influence what you do and what you think.

Stay tuned for the second post in this series as we examine Simon’s modeling of human learning.