The other night I was talking to a co-worker about the difference between developers and sales people. My co-worker said, "I think developers and sales people are different by nature. Developers do their job for the love of coding, sales people work for the money."

I disagree. I think the difference is incentives and measurement. The output of a sales person is easy to measure. Because the output is easy to measure, we can pay them in accordance with production. Because a sales person making money is directly aligned with HubSpot making money, we actively seek sales people who are motivated by money. Greedy sales people are good sales people.

Measuring the output of a programmer is very difficult. So instead we find developers who love programming, pay them competitively, seduce them to fall in love with the company, and then hope their intrinsic motivation will produce good results. It works, more or less, but it could be a lot better.

I wonder: how much more could I do if my output were accurately measured? Would I spend less time dawdling over that email, and more time pumping out code? Would I spend less time over-engineering that theoretically perfect solution, and more time building stuff that delivered customer value? Would I spend less time playing with that shiny new development toy? Or would I spend more time relentlessly improving my development environment to boost my own productivity?

After thinking some more, I started scheming out an actual system for paying developers based on performance. Here it is:

Defining a bounty for new features and improvements

The management team and/or the product team must write up a wish list and put a price on each item. An example list might read:

  • Build the minimal, working implementation of a survey application (a description would follow with five or ten bullet points defining the minimal implementation). $220,000
  • Improve the success rate of trials from 10% to 15% (success rate being defined as contacting a sales person or logging in a second time). $75,000 per percentage point improved.
  • Improve the median load time of a given page from 1.5 seconds to 0.5 seconds. $10,000
  • Develop a new application that will be used by 500 users a week. $400,000
  • Improve usage of a given application from 100 weekly users to 200. $50,000
  • Reduce support calls for a given application from 60 calls a week to 30 calls a week. $1,000 for every 10 calls reduced.
  • Build small feature x. $1,000

Nailing down a set of requirements

A team of two to four developers then picks a project to work on. It is key that the developers get to choose the project; otherwise the whole system breaks down.

After the dev team picks a project, the developers work with the product team to nail down a fuller specification. This spec includes various requirements, mockups, and wire frames. There would be some room for haggling over which features would be included.

The spec must leave room for iteration. Instead of reading "put this button exactly at 240px," it should read "build this screen, and do up to two iterations of the UI."

Finally, the group will nail down the rough spec, and the developers will agree to deliver the product at the given price.

The spec should be broken up so that measurable deliverables can be presented within one month. The developer team should receive payment for meeting these deliverables.

Measuring results

For some bounties, the measurement would be straightforward. Measure how much the conversion rate increased, or how much support calls fell, and base pay on that.

The "build a new application" bounties are more difficult. The best way would likely be scoring matrix. The components would be:

10% usability testing. During the specification stage, the product owners would write a script for usability testing. If five users get through the script without needing assistance, the app scores a 10.

20% customer feedback. Before development begins, a set of beta customers should be found. These customers will grade the app when it is done. Developers should consult the customers as they build the product.

20% product owner grade. The original management team/product team sponsor of the app would grade the application based on whether it meets or exceeds expectations.

20% usage stats. During the specification stage, a usage target should be set (e.g., 200 users within one month of launch). The application will be graded against that target.

20% defect density. Based on the number and severity of bug reports the application generates after launch, the score would go up or down.

Based on its total score, the developer team's bounty would be adjusted anywhere from -30% to +30%.
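To make the payout mechanics concrete, here is a minimal sketch in Python. The weights come from the list above (note that, as written, they sum to 90%, so the sketch normalizes them); the linear mapping from score to adjustment is my own assumption: a middling 5/10 leaves the bounty untouched, a perfect 10 adds 30%, and a 0 cuts 30%.

    # Scoring-matrix payout sketch. Weights are taken from the list above
    # (they sum to 0.9 as listed, so we normalize); the score-to-adjustment
    # mapping is an assumption, not something specified in the post.
    WEIGHTS = {
        "usability_testing": 0.10,
        "customer_feedback": 0.20,
        "product_owner_grade": 0.20,
        "usage_stats": 0.20,
        "defect_density": 0.20,
    }

    def bounty_payout(base_bounty, scores):
        """Each score runs 0-10. Returns the adjusted bounty."""
        total_weight = sum(WEIGHTS.values())
        weighted = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS) / total_weight
        adjustment = (weighted - 5.0) / 5.0 * 0.30  # maps 0..10 to -30%..+30%
        return base_bounty * (1.0 + adjustment)

    # Example: a solid but imperfect app against a $220,000 bounty.
    print(bounty_payout(220_000, {
        "usability_testing": 10,
        "customer_feedback": 8,
        "product_owner_grade": 7,
        "usage_stats": 6,
        "defect_density": 9,
    }))  # ~256,667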

For the application to be accepted, it must meet the company's coding standards: a non-team member reviews the code; backups, redundancy, and monitoring need to be in place; unit test coverage must exceed 60%; and so on.

Dividing the Bounty

The bounty is awarded to the team as a whole. But the team must divide up that bounty among its members. I can imagine two possible methods.

Method 1) The shares of the bounty are based on the developers' current salaries. So if Alice makes $100K and Bob makes $80K, the bounty would be divided roughly 56/44.

Method 2) Before each sprint or iteration, the team collectively assigns hour estimates to each task. Developers then get credit based on the total hours of the tasks they complete. If a developer finishes a task faster than the original estimate, he gets credit for the original hours. If he is slower than the original estimate, he gets credit for the hours worked, though once he goes over the estimate the team lead may reassign him to a different task. If extra, non-bug tasks need to be added during the sprint, those tasks must get hour estimates too.
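Here is a minimal sketch of both methods in Python. The names and figures are made up for illustration; only the crediting rules come from the description above.

    # Method 1: shares proportional to each developer's current salary.
    def split_by_salary(bounty, salaries):
        total = sum(salaries.values())
        return {dev: round(bounty * s / total, 2) for dev, s in salaries.items()}

    # Method 2: credit per completed task. Finishing early still earns the
    # full estimated hours; running over earns the hours actually worked
    # (the team lead's option to reassign an overrunning task is the check
    # on gaming this rule). tasks: list of (developer, estimated, actual).
    def split_by_hour_credit(bounty, tasks):
        credits = {}
        for dev, estimated, actual in tasks:
            credits[dev] = credits.get(dev, 0) + max(estimated, actual)
        total = sum(credits.values())
        return {dev: round(bounty * c / total, 2) for dev, c in credits.items()}

    # Method 1 with the Alice/Bob example from above, on a $100K bounty:
    print(split_by_salary(100_000, {"Alice": 100_000, "Bob": 80_000}))
    # {'Alice': 55555.56, 'Bob': 44444.44}  -- roughly 56/44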

How should the management team price features?

The Unscientific Method

Most product/management teams have some sort of road map. The road map lists various features or applications that need to be developed and allocates developer time toward those user stories. This road map can be turned into a price. Take a feature on the road map. Then ask yourself: would we still do this if the time it takes runs over by a month? Two months? Four months? Then add up the cost of the developers that would be assigned to work on it for that many months. That is your indifference price. But of course, management does not want to break even; it wants to make money. So knock 25% off of that price, and that is the bounty for the feature.
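As a worked example, with made-up numbers: suppose a feature would be staffed by three developers, and management decides it would still be worth doing even if the schedule stretched to six months total. (I am reading "that many months" as the total schedule at which you become indifferent.)

    # Indifference-price heuristic. All figures here are assumptions.
    devs = 3
    monthly_cost_per_dev = 12_500   # assumed fully loaded monthly cost
    indifference_months = 6         # total months at which we'd no longer bother

    indifference_price = devs * monthly_cost_per_dev * indifference_months
    bounty = indifference_price * 0.75   # knock 25% off so the company profits
    print(indifference_price, bounty)    # 225000 168750.0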

Based on usage

Most SaaS companies have numbers that link application usage to customer retention. Customers that use the app regularly rarely churn. Customers that do not find it valuable stop using the app and churn.

For each new part of the application, usage can be measured. That usage number can actually be turned into an expected impact on the churn rate, and a dollar value to the company.
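A rough sketch of that conversion, with hypothetical retention figures (none of these numbers come from real data):

    # Turning a usage target into an annual dollar value. All figures assumed.
    monthly_revenue_per_customer = 500
    monthly_churn_nonusers = 0.05   # churn among customers who ignore the feature
    monthly_churn_users = 0.02      # churn among customers who use it weekly
    expected_adopters = 200         # the usage target from the spec

    # Simplified (non-compounding) estimate of revenue retained per year.
    churn_reduction = monthly_churn_nonusers - monthly_churn_users
    annual_value = expected_adopters * churn_reduction * monthly_revenue_per_customer * 12
    print(annual_value)  # ~36,000 -- roughly $36K a year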

Conversion Rate

A project to improve the setup or trial process could be measured by the increase in conversion rate or success rate. The management team knows how many leads are coming in and what a customer is worth, so it can calculate how much an increase in the conversion rate is worth to the company.
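For example, with assumed lead volume and customer value, a one-percentage-point lift works out like this (this is the kind of arithmetic behind the "$75,000 per percentage point" bounty above):

    # Valuing a conversion-rate improvement. All figures are assumptions.
    monthly_leads = 2_000
    customer_lifetime_value = 3_000   # assumed value of a closed customer
    conversion_lift = 0.01            # one percentage point

    extra_customers_per_year = monthly_leads * 12 * conversion_lift
    annual_value = extra_customers_per_year * customer_lifetime_value
    print(extra_customers_per_year, annual_value)  # 240.0 720000.0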

Sales Team Demos

If your company uses sales people to sell its wares, you could measure how often each feature is shown off in a demo. If a new feature gets shown off by all the sales people, it's a smashing success.

Decreased Support Costs

Calls and emails to support kill you two ways. First is the direct cost of the support team salaries. Second is that for every call there is someone else who gets frustrated and stops using the feature. The first is easy to quantify; the second is much harder.


Pitfalls

Confounding variables

For any performance-based metric, confounding variables are a killer.

Let's say the bounty pays a team for every 5% it improves the conversion rate. Now imagine marketing starts a new campaign that drives tens of thousands of low-quality leads. The conversion rate will plummet, through no fault of the developers.

Conversely, if marketing starts a new campaign and drives much higher-quality leads, the conversion rate will rise, and the team may be unjustly rewarded.

If you're measuring how much a project reduces support calls, a person on another team might make a large mistake that drives calls up.

There are a couple ways to deal with this problem.

1) Choose measurement variables that have been predictable for at least several months. If the measurement variable is highly unpredictable, you are essentially rewarding developers based on the roll of a die.

2) Give developers complete control of the variable space. If you are measuring support calls, exclude calls on pieces of the product that other teams are currently overhauling. If you are measuring conversion rates, give the team actual control of the home page for the duration of the project.

3) Control for other factors. If there is a steady trend in the variable, then the bounty should be based on improvement over the existing trend.
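Concretely, option 3 might look like the following sketch (illustrative numbers): if the conversion rate was already drifting upward before the project started, pay only on the improvement beyond that drift.

    # Crediting improvement over the pre-existing trend. Figures illustrative.
    baseline_rate = 0.10      # conversion rate when the project starts
    trend_per_month = 0.002   # the rate was already rising this fast on its own
    project_months = 3
    observed_rate = 0.15      # rate measured at the end of the project

    expected_without_project = baseline_rate + trend_per_month * project_months
    credited_improvement = observed_rate - expected_without_project
    print(credited_improvement)  # ~0.044 -> pay on 4.4 points, not 5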

Haggling over the application's grade

Once a coin-operated culture takes hold, the pressure to game the system becomes intense. Sales seems straightforward; the number of customers you sold is a very hard metric. But even here there is a lot of room for haggling. What if the customer cancels? What if the sales person misleads the customer? What kind of end-of-month discounts are allowed? What happens if two sales people are on a call? Our CFO spends no small amount of time dealing with sales people fighting for that extra $500 in commission.

The problem is much worse when figuring out compensation for a new feature. If the feature is graded by other people at the company, developers may apply a lot of pressure to get a good grade, and there could be ill will if the grade is poor.

Some of this can be mitigated by good communication. The developers and the product management team should be constantly talking. The developers should know where they stand at each iteration, and what the grade will be if they release at any point.

Losing the iterative approach - the dangers of coding to spec

In many cases the entire problem space is undefined. Take the case of an email application. If you are paying someone based on getting something out the door, they will code the most basic application possible; they will code to the exact specification and not any more. But if you pay them a simple salary plus equity, and then expect them to do their best, the developers may have an idea that will make a far, far better application, even if it takes more time. They may spend the extra month to make it fast and ajaxy, because they are able to make trade-offs in their heads and make the right decision about time allocation.

Consignment-based coding will work best in two cases: a) you have a lot of separation between your product team and development team, so the developers are mostly implementing, not designing. While perhaps not ideal, this is already the reality at many companies. Or b) there is a lot of trust and cooperation between the product team and the developers. The developers could say, "hey, we could do X, but we would have to add to the price. We think it is really worth it; here is why. What do you think?"

And of course, bounty programming based on achieving metrics (like usage or conversion rate) will allow far more developer initiative and iteration, as the developers can try a number of approaches without having to cycle through a planning and specification stage.

Developer Risk

Developers assume much more risk with bounty-based pay. What if no one uses the app simply because it was a bad idea? What if it takes far more hours to produce than expected? Or what if it is simply impossible to produce at all?

One group of developers might make a valiant effort redoing an app to improve conversions, but to no avail. They might end up making half the salary of other developers despite their efforts. Another group might make a few simple changes and blow out their numbers. Developer pay has the potential to become very erratic.

To mitigate this risk, developers must be able to pick and choose their user stories. Developers must work with the product team to develop the spec. If one aspect of the spec has a great deal of technical risk, the developer might force the product manager to price it separately, and then choose whether or not to implement that add-on.

Overall, the development team has an incentive not to build features that are a really bad idea, or that will end up in a twisted rat hole. That's a good thing.

Unintended Benefits

Forcing management to price improvements

When I first thought of this plan, I worried about how much work it would take to figure out a value to the company for each possible feature. But then I realized that requiring a deeper analysis was a feature, not a bug. In the early stages of a startup, a product team needs to work furiously and throw a lot of things against the wall and hope that something sticks. But in the middle stages the company has a lot of data about customer wants and needs, and the major bottlenecks hindering growth. Spending the effort to systematically judge the relative value of all possible initiatives is hugely valuable when directing a several million dollar engineering budget.


Forcing developers to work closely with product management and customers

One initial worry was that animosity could grow between the developers and the product managers, since the product team's grade of the app will determine the dev team's salary. But a side benefit is that it will force developers to work much more closely with the product managers. For this plan to work, developers should be in daily contact with the product team, and always know where the current version of the app stands with them. Diving underwater for a month and coming up with an app would be a recipe for disaster.


Thoughts? Comments? Tomatoes?

So what do people think? Has anyone tried a system like this? Can the impossible be done? Can a dollar value be placed on developer productivity? Or would this plan collapse into a rancorous mess? Please offer your thoughts.
