Five Simple Steps to Push Your Application to the Top

In just a few simple steps you can make a huge difference in getting your application noticed when applying for a new job.

 

As you may have noticed on the Smartly blog, Pedago is currently on the hunt for motivated content developers and back-end engineers to help bring Smartly’s bite-sized, interactive courses to life. Part of my job is to review and prioritize the diverse writing samples and resumes submitted by our eager content developer applicants. In doing so, I’ve noticed that taking just a few simple steps can make a huge difference in getting your application to rise to the top when applying for a new job. While the items on this list may seem obvious, it’s never a bad idea to review the basics so you can focus on being the number one, standout applicant.

  • Proofread your resume. If your job requires “attention to detail,” make sure that’s reflected in your resume; you’ll stand out in a bad way if you misspell the word “meticulous” while listing it as one of your top five traits! What better way to impress your hiring manager (and give them a peek into your future work at the company) than to showcase a resume that is shipshape and devoid of typos and grammatical errors? This absolutely goes for personal websites and your LinkedIn profile as well—if you’ve provided links to an online resume or portfolio, make sure it represents your best work.
  • Read the job posting. Perhaps your future employer is looking for a cover letter; maybe they want you to apply via a special link, or they’ve provided specific instructions for a required skills test. Regardless of the instructions provided, it’s crucial that you demonstrate your interest in the position and ability to understand instructions by following them to a T. It’ll help you make a great first impression!
  • Do your research. Spend a little time on your potential employer’s website. Check out the current employees, do a little digging into the company mission, and try to figure out their main focus as a company. If you’re able to do so for a minimal cost, try out the company’s product! Do all of this before you apply—how else will you know whether you really want the job?
  • Focus on what you bring to the position. Once you’ve taken the time to read the job posting and consider how your skills might be a great fit, tell us about it. You can focus on the compensation details once you have a job offer in hand. We know you’re looking for decent pay, fair vacation time, maybe flexible work arrangements, and more. But you can leave those details until later, once you’ve determined that you’re a good match for the position itself.
  • Be polite, even if you don’t get the job. Employers often have a large pool of candidates for each job posting. You may never know whether you were the second choice or last in line for the position, but you can guarantee that your resume will never get a second chance if you reply to a rejection notice in a negative fashion. Keep it positive, and keep your chances of scoring a position on the next go-around!

While this is certainly not an exhaustive list of all the things to keep in mind when applying, it’s a great place to start. What other tips would you add to the list? Happy hunting!

 

*photo credit: http://deathtothestockphoto.com

The Blue Ocean Mind

In my third year of running Rosetta Stone as CEO, I opened the book Blue Ocean Strategy, and it was a pivotal moment.

 

Ten years ago, I picked up a copy of Blue Ocean Strategy at an airport bookstore.

At the time, I was in my third year of running Rosetta Stone as CEO and we were enjoying annual growth rates of close to 100%. While still operating out of a converted seed warehouse in rural Harrisonburg, Virginia, I engaged consultants and advisers who invariably asked me: “Who’s your competition? What are your competitors’ strengths? How can you stay ahead or catch up?” or “Do you know how big the language learning industry is today? How fast is it growing? How can you gain market share?” Those weren’t the things we were focused on! We were thinking about how to build an interesting, enduring, and delightful company.

We were an emerging company and were…well…a bit odd. Our price point was tenuous (twenty times the cost of rival language learning software). Marketing spend also seemed unsustainably dominant (with sprawling kiosks and crazy-high ad spending), all managed without an integrated media plan. Indeed, we were unfocused in terms of our end markets, offering the same curriculum in 25+ languages to the US Army, Fortune 500 companies, school districts, homeschoolers, and individual consumers. We were a legacy of seemingly illogical decisions (we were told) in need of a strategy to become more competitive.

And yet, we had just become the #1 company in the US language learning industry by revenues, overtaking the long-established, brick-and-mortar-based Berlitz. We were profitable, and were one of the fastest-growing companies in the nation. We certainly did not feel like we were all wrong—even if we didn’t have it together in all sorts of ways. We were doing well, and enjoying the ride.

Discovering Blue Ocean Strategy was a pivotal moment for me. It provided a framework for what we had been doing and explained why our independent and unusual approach was working. It spurred me to hone our strategic approach as we evolved innovative new offerings and business models. And as colleagues also became familiar with Blue Ocean Strategy, its powerful concepts became part of our common parlance across the company, inspiring product designers to rethink English language training in Asia and helping onboard new collaborators, who typically arrived wanting to teach us their more “reasonable” way of focusing on beating the competition.

While not every Blue Ocean strategy we pursued turned to gold, it was the right framework for designing solutions to age-old problems. Like anything in life, it will not work every time, and reality is unpredictable. But it is a wonderful way to approach work and life in general—a license to do what you think is right, and to stop wasting time on things you don’t think are required. It is what explains successes such as Tesla, Cirque du Soleil, and IKEA—and how they escape the traditional competitive mindset that is so limiting and even exhausting. If you haven’t yet used the Blue Ocean Strategy framework to think about your company and life, please do so! You’ll be happier for it.

As one of our first courses at Pedago, we’ve developed a quick intro to Blue Ocean Strategy via our new platform Smartly. Whether it is your first contact with the framework or more of a refresher, with Smartly, you’ll breeze through it!

And in the process, you’ll get to see what Alexie, Ori and I, and the rest of the Pedago team, have been up to over the past couple of years. We think we’ve come up with a powerful new way to teach using technology, and we hope that it works for you. In the future, we’ll develop many more courses using this platform and technology.

Our solution is designed for the smartphone, and works great on desktop and tablet. And there aren’t any plans for a CD-ROM or any bright box packaging! So get going and escape the red ocean at https://smart.ly/blue-ocean-strategy.

May Blue Ocean Strategy become your team’s strategic lingua franca!

Git Bisect Debugging with Feature Branches

Inspectocat, courtesy of GitHub

At Pedago, we follow the GitHub Flow model of software development. Changes to our app are made in feature branches, which are discussed, tested, code reviewed, and merged into master before deploying to staging and production. This approach has become pretty common, and in most cases does a good job of balancing our desire to ship quickly with the need to control code quality.

But what happens when a bug inevitably creeps in, and you need to determine when it was introduced? This article describes how to apply git bisect in the presence of numerous feature branches to quickly detect when things went awry in your codebase.

Enter Git Bisect

git bisect is a tool for automatically finding where in your source history a bug was introduced. It saves you the pain of manually checking out each revision yourself and keeping a scratchpad of which ones were good and bad.

Here’s how you get started:

# start up git bisect with a bad and good revision
git bisect start BAD_REVISION GOOD_REVISION

At this point, git will check out a revision and ask you whether that commit is good or bad. You tell it by typing git bisect good or git bisect bad. Git then uses binary search (bisecting the history) to quickly home in on the errant commit.
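
In practice, the manual loop looks something like this (a rough sketch; you run your test against whatever revision git has checked out, then report back):

# git has checked out a candidate revision roughly in the middle of the range
git bisect bad       # this revision exhibits the problem
# git checks out the next candidate; test it again
git bisect good      # this one is fine
# repeat until git prints the first bad commit, then restore your original HEAD
git bisect reset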

You can also further automate things by giving git a script to execute against each revision with git bisect run. This allows git to take over the entire debugging process, flagging revisions as good or bad based on the exit code of the script. More on this below!

Example

Imagine you come back to work from a vacation and discover that the Rails specs are running much more slowly than you remember them being before you left. You know that the tests were fast at revision 75369f4a4c026772242368d870872562a3b693cb, your last commit before leaving the office.

Being a master of git, you reach for git bisect. You type:

git bisect start master 75369f4a4c026772242368d870872562a3b693cb

…and then for each revision git bisect gives you, you manually run rake spec with a stopwatch. If it’s too slow, you type git bisect bad, and if it’s fast you type git bisect good.

That’s kind of monotonous, though, and didn’t we mention something about automating things with a script above? Let’s do that.

Here’s a script that returns a non-zero error code if rake spec takes longer than 90 seconds:

#!/bin/bash
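# Time one full `rake spec` run and exit non-zero if it takes longer than
# 90 seconds, so that `git bisect run` will mark the revision as bad.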

start=`date +%s`
rake spec
end=`date +%s`

runtime=$((end-start))


if [ "$runtime" -gt 90 ]
then
    echo TOO SLOW
    exit 1
fi

echo FAST ENOUGH
exit 0

Let’s say you save this script to /tmp/timeit.sh and make it executable. You could use it instead of your stopwatch and keep manually marking commits as good and bad, but let’s go further and have git bisect do the marking for us:

git bisect run /tmp/timeit.sh

Now we’re talking! After waiting for a bit, git tells us that the errant revision is:

31c60257c790e5ab005d51d703bf4211f43b6539 is the first bad commit
commit 31c60257c790e5ab005d51d703bf4211f43b6539
Author: John Smith <john@example.com>
Date: Wed Jan 21 12:02:38 2015 -0500
   removing defunct jasmine-hacks.js
:040000 040000 94ff367b586ec62bacb3438e0bc36ae62f90da22 bd3b447e7fc8ce782a7a4c01d11d97383bf06309 M karma
bisect run success

OK, so that sounds good. But wait, that’s a commit that only affected JavaScript unit tests! How could that have caused a problem with the Ruby specs?

Damn You, Feature Branches

The problem is that git bisect is not confining itself to only the merge commits in master. When it narrows down the point in time when things got slow, it isn’t taking into account the fact that most revisions are confined to our feature branches and should be ignored when searching the history of changes to master.

What we really want is to only test the commits that were done directly in master, such as feature branch merges, and the various one-off changes we commit directly from time to time.

git rev-list

Here’s a new strategy: using some git rev-list magic, we’ll find the commits that only exist in feature branches and preemptively instruct git bisect to skip them:

for rev in $(git rev-list 75369f4a4c026772242368d870872562a3b693cb..master --merges --first-parent); do
  git rev-list $rev^2 --not $rev^
done | xargs git bisect skip

In short, the above chunk of bash script:

  1. Gets all revisions between the known-good revision and master, filtering only those that are merges and following only the first parent commit, and then for each commit
  2. Gets the list of revisions that only exist within the merged branch, and then
  3. Feeds these branch-only revisions to git bisect skip.

Pulling It Together

Here’s the complete list of commands we’re going to run:

$ git bisect start master 75369f4a4c026772242368d870872562a3b693cb

$ for rev in $(git rev-list 75369f4a4c026772242368d870872562a3b693cb..master --merges --first-parent); do
>   git rev-list $rev^2 --not $rev^
> done | xargs git bisect skip

$ git bisect run /tmp/timeit.sh

This runs for a while, and completes with the following chunk of output:

Bisecting: 14 revisions left to test after this (roughly 4 steps)
[086e45] Merged in update_rails_4_2 (pull request #903)
running /tmp/timeit.sh
....................................................................
....................................................................
....................................................................
....................................................................
....................................................................
....................................................................
....................................................................
....................................................................
....................................................................
....................................................................
....................................
Finished in 1 minute 21.79 seconds (files took 6.63 seconds to load)
719 examples, 0 failures
Randomized with seed 54869

TOO SLOW

There are only 'skip'ped commits left to test.
The first bad commit could be any of:
342f9c65434bdeead74c25a038c5364512d6b67e
9b5395a9e1c225f8460f8dbb4922f52f9f1f5f1d
dcb1063e60dbcb352e9b284ace7c83e15faa93df
027ec5e59ca4c380adbd352b6e0b629e7b407270
1587aea093dffaac2cd655b3352f8739d7d482dc
2ff4dee35fd68b744f8f2fcd5451e05cb52bff87
73773eae4f6d283c3487d0a5aea0a605e25a8d3f
1cf615c6fa69e103aea3761feaf87e52f1565335
26d43d2060880cb2dbe07932fe4d073e3ccb7d44
293190779e33e26b9ceabfcff48021507591e9d1
77d504ee4b52b0869a543670cd9eb2fb42613301
3f25514f793e87549c9d64ddcfe87f580b29f37e
d43d1845b9fd6983ff323145f8e820e3aea52ebd
32a9e3c879546d202c27e85ab847ca9325977d5c
ea3e3760fb06e3141e5d12f054c1153e55b5cc67
9665813264a5e0d7489c43db871b87e319143220
b8f5106a8901d56621e72ba6b8bd44d4d5471dd2
086e45a2c0a2ed2cd26eeb48960c60048af87d0a
We cannot bisect more!
bisect run cannot continue any more

Hooray! We’ve found our offending commit: Merged in update_rails_4_2 (pull request #903). That makes sense—we upgraded RSpec and made a bunch of testing-related changes in that branch.

Furthermore, we see a list of skipped commits that git bisect didn’t test. This also makes sense—those commits are all within the update_rails_4_2 branch.

Conclusion

With a bit of git magic and some scripting, we’ve completely automated what could have been a very tedious exercise. Furthermore, thanks to the judicious use of git rev-list and git bisect skip, we’ve been able to cajole git into giving an answer that takes our branching strategy into account. Happy hacking!

Fixturies: The speed of fixtures and the maintainability of factories

 

We had a Rails app. We used factories in our tests, and it took ten minutes to run them all. That was too slow. (Spoiler alert: by the end of this blog post, they will run in one minute.)

We suspected that we could speed up the test run time by using fixtures instead, but worried that fixtures would be much more difficult to maintain than our factories.

As it happens, we are not the first developers to deal with the issue that factories are slow and fixtures are hard to maintain.  I cannot explain the issue any better than the following folks do, so I’ll just give you some quotes:

“In a large system, calling one factory may silently create many associated records, which accumulates to make the whole test suite slow …”

“Maintaining fixtures of more complex records can be tedious. I recall working on an app where there was a record with dozens of attributes. Whenever a column would be added or changed in the schema, all fixtures needed to be changed by hand. Of course I only recalled this after a few test failures.”

“Factories can be used to create database records anywhere in your test suite. This makes them pretty flexible and allows you to keep your test data local to your tests. The drawback is that it makes them almost impossible to speed up in any significant way.”

In our case, 99% of our tests were using identical records.  For example, we were calling FactoryGirl.create(:user) hundreds of times, and every time, it was creating the exact same user.  That seemed silly.  It was great to use the factory, because it ensured that the user would always be up-to-date with the current state of our code and our database, but there was no reason for us to use it over and over in one test run.

So we wrote the gem fixturies to solve the problem this way:  Each time we run tests, just once at the beginning, we execute a bunch of factories to create many records in the database.  The fixturies gem then dumps the state of the database to fixtures files, and our tests run blazingly fast using those fixtures.
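
To make the idea concrete, here is a rough conceptual sketch (this is not the fixturies gem’s actual API, and the factory and table names are only examples): run the factories once, then snapshot each table to a YAML fixture file that the suite loads instead of re-creating the records.

# Conceptual sketch only; the real work is handled by the fixturies gem.
require "yaml"

FactoryGirl.create(:user)      # the shared records most tests rely on
FactoryGirl.create(:course)    # hypothetical second factory

%w[users courses].each do |table|
  rows = ActiveRecord::Base.connection.select_all("SELECT * FROM #{table}").to_a
  fixtures = {}
  rows.each_with_index { |row, i| fixtures["#{table}_#{i}"] = row }
  File.write(Rails.root.join("spec", "fixtures", "#{table}.yml"), fixtures.to_yaml)
end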

We saw a 10x improvement in run times, from ten minutes down to one.  We still use factories here and there in our tests when we need a record with specific attributes or when we want to clear out a whole table and see how something behaves with a certain set of records in the database.  But in the vast majority of cases, the general records set up in that single run at the beginning are good enough.

If you are using factories in your tests to re-create the same records over and over again, and your tests are running too slowly, give fixturies a try and let us know how it goes.  It only took us about half a day to refactor 700 tests to use fixturies instead of traditional factories, so there is a good chance it will be worth your time.

text-transform: An Unlikely Source of Jank

Here at Pedago, we take a hard look at the performance of our applications so that our users don’t have to experience any troublesome hiccups (or “jank”) that might otherwise sour a sweet learning experience.

While “performance” can cover a wide array of metrics, we tend to be extremely critical of browser overhead (script execution, rendering layout, and painting). While others have covered optimization of these metrics in great detail, we came across an unlikely jank-vector that we thought was worth mentioning.

When analyzing CSS performance in relation to the browser lifecycle, there are a few notorious styles (e.g., border-radius, box-shadow, transform, backface-visibility) that tend to slow down frame rate. Some of these are obvious, as they dramatically influence the rendering process or add additional calculations for stylistic oomph. Focusing on that list, though, one is likely to overlook the rather mundane text-transform.

We had several elements, each containing a number of child elements, all of which were performing CSS-dictated uppercasing on their text content. This might not be a significantly intensive operation in itself, but combined with some rapid, erratic scrolling, it degraded the user experience quite noticeably. After we updated the content to be rendered in uppercase without the need for CSS text transformation, the improvement was obvious.
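
As a rough sketch (the class name here is hypothetical, not our actual markup), the change amounted to deleting the rule and shipping the text already uppercased:

/* Before: many small elements asked the browser to uppercase their text via CSS */
.lesson-label {
  text-transform: uppercase;
}

/* After: the rule is removed and the markup contains the uppercased text directly,
   e.g. <span class="lesson-label">GETTING STARTED</span> */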

Here’s how things looked on a common mobile platform, prior to the change (FPS is the key metric, with 60FPS as an ideal target):

with CSS text transform

As you can see, we were barely hitting the 30FPS threshold and often even missing that window. Here’s what we observed after we removed the relevant text-transform styles:

no CSS text transform

As you can see, we’re now closer to consistently hitting that golden 60FPS benchmark! Granted, we were probably abusing a CSS style that was intended for narrower application, and the DOM of this particular page meant that there were a lot of affected elements, so your mileage may certainly vary. Still, we hope this helps others in the war against jank!

How do I read the AngularJS message: [$rootScope:infdig] 10 $digest() iterations reached. Aborting! Watchers fired in the last 5 iterations

I’ve been using Angular every day for over a year, but have always been too intimidated by this error message—and the crazy list of information that comes along with it—to really dig into it and find out how to use it to my advantage.

Building a new product at Pedago, I see this error happen from time to time in production (possibly related to a user using an old browser), but never when developing locally. So I have the error message from our error logs, but I can’t reproduce it or debug it by making changes in the code.

Here’s what I found after researching the issue on my own.

After the colon there are two brackets (…’atchers fired in the last 5 iterations: [[{“msg’). The first bracket is the beginning of a JSON block. Copy from the first bracket to the end of the error and find a way to pretty-print that JSON. (Use your favorite code editor or an online JSON formatter.)

Now you have a pretty-printed array with 5 entries in it. Each entry represents an iteration of the digest cycle, i.e. one pass through all of the active watchers in your app, looking for changes. Angular will repeat iterations until it completes one in which no watcher’s value has changed, or until it hits 10 iterations, at which point it throws this error. That’s what happened in this case.

There were 10 iterations before the error, and 5 are included in the error message. Presumably that means there are 5 more iterations that happened earlier than what is included in the error message. The first entry in the error message is the 6th iteration, and the last entry in the message is the 10th iteration.

The entry for each iteration is also an array. In this case it is an array of objects, and each object represents a watcher whose value changed during this iteration. Each object will give you the text or the function that defines the watcher, the old value for the watcher before this iteration and the new value after this iteration.
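
As a rough illustration (the expressions and values here are invented, but the msg, oldVal, and newVal keys are what Angular reports for each changed watcher), a single iteration’s entry might pretty-print to something like:

[
  {
    "msg": "vm.visibleItems.length",
    "newVal": 4,
    "oldVal": 3
  },
  {
    "msg": "fn: updateLayoutWatcher",
    "newVal": true,
    "oldVal": false
  }
]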

Read it from top to bottom like a story, adding commentary based on what you know about your app. In my case, I was able to see how the changes in each iteration caused new watchers to be created, requiring yet another iteration. “In the 6th iteration, this watcher changed, causing this new stuff to be rendered on the page, creating new watchers which were assigned values in the 7th iteration, and then …” There was no infinite loop or anything. In fact, if Angular had been willing to do just 1 or 2 more iterations, it would have finished.

I hope this is helpful to anyone else experiencing this issue.

Five key principles that make geographically split software teams work

Geographically distributed teams can work. How? In my experience, there are five key principles that make all the difference.

The other day, I realized that I have worked on geographically split software teams for the last decade. I didn’t set out intending to do so. When I got my first job out of college as a software engineer, I thought being in the office was non-negotiable. If you ask company recruiters, most say in no uncertain terms they are not looking for remote employees.

But the reality on the ground is that many software companies end up letting most full-time engineers work flexible hours, from anywhere. Yet few companies acknowledge or advertise this explicitly. You can’t get this opportunity by asking for it in your interview, but once you’re in the door, it arises. There are a few ways this happens.

Software companies with heavy on-call burdens, like Amazon for example, end up with an unspoken but widespread assumption that people can work from anywhere because they are working all the time. Since most engineers are contractually obligated to be on-call, and emergency issues arise at all hours of the day and night, these engineers start working remotely out of necessity. Soon, whole teams realize there is no barrier to working where and when they are most productive. On any given day during my tenure at Amazon, maybe half of my teammates weren’t in the office when I was.

 I’ve worked for several startups with surprisingly similar team environments. At one, people preferred to work at nearby coffeehouses with outdoor porches that made it easier to work while smoking. At another, unreliable wireless bandwidth in the office drove people to work from home out of necessity. At another, almost every person employed there lived in a different time zone, because this startup needed a very specific set of skills that happened to be scarce at the time.

Later I went to work for Rosetta Stone, which had three offices when I started and had grown to six by the time I left. Most of the teams I worked on ended up having people in at least three office locations, and as many time zones. Only one of the four bosses I had during those years worked in the same office as me. Not only were the teams geographically split, but once a team is spread across two or more time zones, it is, for all practical purposes, also working flex-time hours.

These teams all worked well together. No matter where and when we worked, everyone was still accountable for meetings and deadlines. I never felt particularly disconnected from my team, and I had rapport with and support from my bosses.

Today I work for Pedago.com, a startup company that is geographically split and has been from the very beginning. It is the most productive team I have ever worked with.

Geographically distributed teams can work. How? In my experience, there are five key principles that make all the difference.

1. Shared chat room where all team communication, questions, updates, and banter happen

I’ve used Skype, IRC, and HipChat for this — there are many viable options. In a geographically split team, showing up to the chat room is the same as showing up to work. Conversely, not showing up to chat is like not showing up to the office, with the same professional and social side effects. Knowing everyone will be in one place means you always know where you can ask questions if you have them. And if you are ever away and need to catch up, there’s a nice transcript of all the debates, conversations, and decisions the team made while you were gone.

2. Shared “online” hours

These are the hours everyone is guaranteed to be available in the shared chat room. No matter how scattered the team is, there will always be some set of hours that work for everyone. Everyone is assumed to be available during these hours, and if any person can’t make it or needs to leave to run errands, they are expected to notify everyone.

3. Short daily video check-in meetings

If your team uses Scrum as its project management strategy, these are just your daily standup meetings. It makes a huge difference to see people’s faces once a day, if only to remember that they are human beings in addition to being demanding product owners or cantankerous engineers. The visual feedback from face-to-face conversation helps facilitate complex discussions and amplifies positive feedback, and the subconscious pressure to report a socially acceptable level of progress helps hold everyone accountable.

4. Teammates need to be aggressive about helping and asking for help.

However tempting, no one should ever spin their wheels when stuck. Teams should declare that ad hoc pair programming, calls, and hangouts are a part of their team’s working style, and developers should be aggressive about picking up the phone or screen-sharing whenever they get stuck or need to ask a question.

5. Everyone on the team needs to buy into these rules.

No exceptions. If even one person refuses to get in the shared team chat-room or doesn’t feel like showing up to the video check-in meetings every day, these strategies won’t work for the rest of the team. Everyone on the team has to buy in, and in particular, managers and team leads need to lead by example, being disciplined about leading discussions and disseminating information only in the team’s shared forums.

But how do you know people are working if you can’t see them?

Simple answer. The same way you know they’re working when you can see them: you talk to them about requirements, check in on progress, look at what they deliver. If you are a technical manager, you monitor deployments or skim the latest check-ins. Software management really isn’t very different with distributed versus in-house teams. The secret to success is still the same: clear direction, well-defined, prioritized requirements, and carefully managed execution.