Good and Bad Password Strength Checkers

I’ve found only two good online password strength checkers, Passfault and Kaspersky Secure Password Check. The others I’ve tried are pretty bad. Even with those two decent checkers, you’ll be better off if you use randomly generated passwords instead of trying to think up good ones.

In the recently reported Yahoo security breach, an enormous number of passwords were stolen in their hashed form. It turns out the theft had actually happened two years before it was disclosed.

And now people are recycling the same debunked advice on how to make “unbreakable” passwords: as long as you mix upper and lower case, digits, and punctuation, you’re safe. That claim has been debunked and satirized many times over, yet it keeps spreading.

Test Your Old Yahoo Password

Check your former Yahoo password (the one you’ve changed since the breach was reported, right?):

  1. Go to Passfault.
  2. Type your old Yahoo password but don’t hit Enter or click Analyze yet.
  3. Click Show Options.
  4. Under Cracking Hardware, pick “Government cracker ($500,000 machine).” Yahoo reports that a foreign power did this, so we’ll pick this option to try to simulate that situation.
  5. Under Password Protection, pick “Unix BCrypt Hash,” which is what Yahoo uses.
  6. Click Analyze. The site will estimate how long it would take to crack your password.

If the result is less than two years, the attackers have had plenty of time to crack your password.

If the result is in centuries, the attackers have had only a small chance of success against your password.

You can test your old Yahoo password against the Kaspersky Secure Password Check as well, but note that Kaspersky assumes the use of an average home computer – an attack by amateurs instead of a government with lots of equipment and expertise available.

The Problems With Weak Strength Checkers

The age-old wisdom about password complexity is a case of misapplied math. The thinking was that drawing from a larger character set makes the attacker’s job harder. The math invoked goes like this: if your password has L characters drawn from a character set of N characters, the attacker would have to try up to N^L (N to the Lth power) possible combinations to find it. Proponents point out that 8 characters drawn from the whole North American keyboard (95^8) give far, far more possibilities than 8 lower-case letters (26^8).
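
For the curious, here’s a minimal Python sketch of that N^L arithmetic (the keyspace function is mine, purely for illustration). The numbers are exact; the model, as we’ll see, is the problem.

  # The naive N^L model: L characters drawn from a character set of N characters.
  # It only describes randomly generated passwords, not human-chosen ones.

  def keyspace(charset_size: int, length: int) -> int:
      """Total number of possible passwords under the naive model."""
      return charset_size ** length

  print(f"26^8 (lower case only)  = {keyspace(26, 8):,}")   # 208,827,064,576
  print(f"95^8 (full US keyboard) = {keyspace(95, 8):,}")   # 6,634,204,312,890,625
  print(f"ratio: roughly {keyspace(95, 8) // keyspace(26, 8):,}x")   # about 31,768x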

What’s wrong with that formula? It’s the wrong mathematical model for what really happens.

It applies only to randomly generated passwords, not human-chosen ones. Very few people use randomly generated passwords. And no, insisting that “nobody would ever guess THIS password” is not the same as using a randomly generated one. A password manager or an online password generator can give you randomly generated passwords far stronger than anything you’d make up on your own.

The usual misguided usage of that formula assumes that if you’ve used ANY characters of a particular type (upper case, lower case, digits, or punctuation), that’s as good as using ALL of them. For example, the flawed GRC strength checker assumes that as soon as you change “password” to “Password”, you’ve gone from 26^8 to 52^8. If you also add a digit to the end, GRC claims you’ve now created 62^9 possibilities. How on earth does capitalizing one character and adding one digit create that much improvement? It doesn’t. “Password1” could be cracked within a fraction of a second. The formula 62^9 would be relevant only if you used a rich, random mix of upper case, lower case, and digits.

Put yourself in the attacker’s shoes. The attacker already knows the most common passwords. They know they’ll get a lot of passwords very quickly if they try those. A mere 50 guesses will catch a lot of passwords. If they try a “dictionary attack” of the most common 10,000 passwords, for example, they’ll crack a lot of passwords quickly. Of the remaining uncracked passwords, the attacker knows that if you’re required to include at least one capital letter, it’s probably the first character, and if you’re required to include at least one digit, it’s probably the last. Therefore, if they brute-force their way through the pattern ULLLLLLD (one upper case, six lower case, one digit), their next (26^7)*10 guesses will catch a lot of passwords. The weak strength checkers would have you believe that the attackers have to go through 62^8 possibilities, but it turns out that searching only a small fraction of those possibilities will have a big payoff. The attacker could eventually be faced with 62^8 guesses, but not until they’ve already picked a lot of the low-hanging fruit. They might not even bother with the full workload of guessing all possible passwords.
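
To put rough numbers on that attacker’s-eye view, here’s a short Python sketch of the ULLLLLLD example above (the variable names are mine): the pattern covers well under a tenth of a percent of the keyspace the weak checkers advertise, yet it catches a large share of real passwords.

  # Keyspace a weak checker advertises for an 8-character upper/lower/digit password,
  # versus the much smaller ULLLLLLD search (one upper-case letter, six lower-case
  # letters, one trailing digit) that attackers try early on.

  advertised = 62 ** 8            # "you're safe behind 62^8 possibilities"
  pattern = (26 ** 7) * 10        # one common structure attackers try first

  print(f"62^8        = {advertised:,}")    # 218,340,105,584,896
  print(f"(26^7) * 10 = {pattern:,}")       # 80,318,101,760
  print(f"share of advertised keyspace: {pattern / advertised:.3%}")   # about 0.037%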

What’s the Answer?

Use a password manager that will generate long, random passwords for you – a different password for every site you care about. You’ll need to remember your master password for the password manager, but not all the individual passwords.

Use two-factor authentication wherever it’s available. That way, even if your password is guessed or stolen, the bad guys still don’t have enough info to log in as you. If a site doesn’t offer two-factor authentication, using a password manager with long, random passwords is still a lot safer than self-chosen passwords.

Avoid using the flawed password strength checkers that make really bad assumptions about how password attacks work. You can use Passfault and Kaspersky Secure Password Check for some realistic assessments, but you won’t need them if you use a password manager and two-factor authentication.

Jim Becker


The Problem With Security Questions

The recent IRS breach (Hackers stole personal information from 104,000 taxpayers, IRS says) succeeded because the attackers knew how to answer the “security questions” for a lot of people. The attackers had a phenomenal 50% success rate, roughly 100,000 successes out of about 200,000 attempts. As attacks go, that’s hugely successful.

The incident highlights two important trends:

  • Security questions stink. The attack on the IRS succeeded because attackers were able to get the right answers on half of their targets.
  • Computers keep getting more powerful, but we don’t get any better at remembering passwords or security answers. The attack succeeded because the attackers had the resources for collecting and tracking individual security answers.

What’s wrong with most security questions?

You get the questions that are easy for someone to figure out. Your mother’s maiden name. Your birthday. Your favorite sport, movie, book, or song. These are easy because a lot of people state these things on their social networking sites, or in online profiles. This is how Sarah Palin’s email got hacked in 2008.

You get the questions that are hard for you to remember. Did you enter your city of birth as Washington, Washington DC, or Washington, D.C.? Was your “first car” the first one you drove or the first one you bought?

Some security questions have answers that might change over time. Is your favorite movie today the same as it was when you first provided answers to your security questions?

Password advice usually tells you not to pick a name, date, or place that’s easily associated with you, and yet that’s exactly what the “security” questions are asking you to provide.

When you’re stuck with setting up security questions, try these ideas:

  • Use a password manager, and have it generate and track random answers (a minimal generator sketch follows this list). What’s your mother’s maiden name? 8=z<BzDb’wvd{J(~
  • Pick what your answer would have been in 2000, before there was Facebook, or in 1990, before the Internet became popular – unless it’s the same as today’s answer.
  • Add a standard bit of nonsense to the end of every answer. What’s your favorite color? blue biscuit. What was the first car you drove? Nova biscuit. Just don’t tell anybody what your nonsense add-on is, and make sure you’ll remember it.
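
If you’d rather see what “generate a random answer” looks like in practice, here’s a minimal Python sketch using the standard library’s secrets module (an illustration, not any particular password manager’s method). Store the output in your password manager; you’re not meant to remember it.

  # Minimal random-answer generator using Python's standard library.
  import secrets
  import string

  def random_answer(length: int = 16) -> str:
      """Return a random string to use as a 'security question' answer."""
      alphabet = string.ascii_letters + string.digits + string.punctuation
      return "".join(secrets.choice(alphabet) for _ in range(length))

  print(random_answer())   # output resembles the gibberish example above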

On top of security questions, take advantage of two-factor authentication (also known as two-step verification) wherever possible. Many banks, email systems, gaming sites, and other online services offer it now. If your password or security questions are ever guessed or stolen, you’ll still have a layer of defense against attackers.


Rule of Thumb for Password Length: Add 1 Character Every 3 Years, Just to Keep Up

Here’s a rule of thumb: You need to make your passwords one character longer every three years, just to keep up with the attackers. The eight-character password that might have been good enough six years ago should be ten characters long now, to have the same strength.

Why is this? Computers keep getting faster and attack methods keep getting smarter. If your password doesn’t keep growing, it keeps getting more and more vulnerable to attack.

Why one character every three years?

First, let’s look at the attackers. As a rule of thumb, I’m applying Moore’s Law to computer power, and therefore to attack strength for automated password guessers. Attack strength doubles every couple of years. Doubling every two years, for those who don’t do binary powers, is like adding one “bit” of attack strength every two years. Let’s call it half a bit per year.

Second, let’s look at the defending side – your password’s length. Despite the frequently regurgitated advice of adding punctuation to your passwords, password length matters a lot more than the use of special characters. (Guess what: The Bad Guys have heard about using special characters too.) A rule of thumb from the NIST Computer Security Resource Center estimates that once you get past eight characters, each additional character adds 1.5 bits of entropy (password strength).

Now we compare the two trends: adding 0.5 bits of attack strength per year vs. adding 1.5 bits of defensive strength with every added character. In other words, one extra character of password length makes up for three years of faster computers. And there’s our rule of thumb: Make your passwords one character longer every three years.
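
If you like to see the arithmetic spelled out, here’s a tiny Python sketch of the rule (the 0.5-bits-per-year and 1.5-bits-per-character figures come from the estimates above; the function name is mine):

  # Back-of-the-envelope version of the rule of thumb:
  # attacks gain ~0.5 bits per year; each extra character adds ~1.5 bits.

  ATTACK_BITS_PER_YEAR = 0.5
  BITS_PER_EXTRA_CHAR = 1.5

  def recommended_length(base_length: int, years_elapsed: int) -> float:
      """Length needed today to match the strength base_length had years ago."""
      extra_chars = (years_elapsed * ATTACK_BITS_PER_YEAR) / BITS_PER_EXTRA_CHAR
      return base_length + extra_chars

  print(recommended_length(8, 6))   # 10.0 -- the 8-character password from 6 years ago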

Why Stop There?

There’s no need to stop there. As I’ll explain below, this rule of thumb isn’t perfect. Consider it a minimum standard. If you can come up with a much longer password you can remember, go for it. One way to do that: Think of a line from a song you’ll remember, and use the first two characters of each word. “Come on, Baby, let’s do The Twist” becomes “CoonBaledoThTw” – a nice fourteen-character password. Or you can throw in the punctuation if you want to pad it out some more: “Coon,Ba,ledoThTw.”
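
If you want to play with that trick, here’s a one-function Python sketch of the first-two-letters idea (without the optional punctuation padding):

  # Take the first two characters of each word in a memorable phrase.
  def two_letter_password(phrase: str) -> str:
      return "".join(word[:2] for word in phrase.split())

  print(two_letter_password("Come on, Baby, let's do The Twist"))   # CoonBaledoThTw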

Got Quibbles?

One could easily argue that Moore’s Law doesn’t still hold, or that it doesn’t apply to password attack capabilities. Yup, that’s why I’m calling it a rule of thumb. The point is that computers keep getting faster, so your password needs to keep getting stronger to have the same survival chances it did a few years ago.

One could easily argue that the NIST rule of thumb for password entropy is invalid. In fact, there was some research a few years ago (“Testing metrics for password creation policies by attacking large sets of revealed passwords”) showing that NIST’s rule of thumb wasn’t accurate. Again, I’m offering my one-character-every-three-years rule of thumb as rough guidance. If you’re not doing at least that much, your password keeps getting weaker with each passing year.

You could also argue that lockout policies (temporarily locking your account after a certain number of bad guesses) make a lot of this go away. You’re right, if the attacker is trying to log in directly. (This is called an in-band attack.) However, if the attacker has acquired the password hashes, the lockout policies don’t apply. The attacker can make many millions of guesses per second.

But Every Other Article Says I Need Special Characters

The xkcd comic strip on password strength explains the problem well. Complexity rules are often harder on you than on an attacker.

Many years ago, somebody computed (correctly) that a larger character set, like the 94 characters found on a US computer keyboard, yields a lot more possible passwords. For example, eight lower-case letters give you almost 209 billion possible passwords (26^8), whereas eight characters from the full keyboard give you about 6 quadrillion possible passwords. That’s about 29,000 times more possibilities, so – the thinking went – that password must be 29,000 times stronger. The problem is that the calculation applies only to randomly generated passwords. Most people don’t pick random passwords. If you tell them they can’t use “password,” chances are they’ll pick something like “Pa$$w0rd1,” and that’s pretty easy for an attacker to get. The Bad Guys have known about those substitution tricks for many years.

Longer Is Stronger

The best way to make your password stronger is to make it longer.

I offer up my rule of thumb to add at least one character every three years as a minimum standard, just to keep up.




“Is Dead” Is Dead: That’s the Headline I Want to See

It’s as if you can’t write about IT unless you regularly declare that something is dead. In just the last seven days I’ve seen a whole crop of these “X is dead” headlines.

It’s FUD. It’s the self-appointed cool kids in school who say “Whatever I’m doing is cool, and whatever you’re doing isn’t.” It’s the shallow, simple-minded thinking that says, “I’m so good I’ve got the answer and I don’t even need to hear your question.”

Cut it out, IT writers. Try to get some imagination, and give me insights and information instead of declarations that whatever I might be doing now is so five minutes ago. If all these things were really dead, it would be obvious and you wouldn’t need to keep declaring it.


Why Projects Are Late (It’s in the probabilities)

The book Gödel, Escher, Bach: An Eternal Golden Braid (Douglas Hofstadter, 1979) offered up Hofstadter’s Law: “It always takes longer than you expect, even when you take into account Hofstadter’s Law.”

Back in the 19th century, Vierordt’s Law claimed, roughly, that the longer a task will take, the more likely it is to be underestimated.

Why are projects so often late? I heard a PMP instructor – a seasoned project manager and mentor to others – say that despite years of project management experience, a project that’s only a few weeks late instead of months late is still a rare thrill.

In my experience, two factors account for an awful lot of project lateness, but – strangely – almost nobody mentions one of them. One is that people underestimate. The other is in the math of probability.


The commonly cited factor for project delays is the strong tendency to underestimate.

Research from 2007 (“Bias in memory predicts bias in estimation of future task duration”) found that “underestimation is most likely when a task is well learned.” That seems counter-intuitive, because you’d think that the better you know a task, the better you’d be at estimating the time needed. The study, however, found that people tend to underestimate how long something took in the past, and that makes them underestimate how long it’ll take in the future.

An effect I’ve observed is that people often estimate task length as if nothing will go wrong. To me, this is like my GPS unit that never accounts for traffic lights in its time estimates. Estimating as if everything will happen without delay means your estimates are probably low.

Similarly, there’s what I call the “MapQuest effect.” (Or fill in your favorite mapping tool.) This means estimating the main activity while ignoring all the lesser activities that go with it. If I get driving directions from my suburban home to a particular downtown DC office building, MapQuest tells me it’ll take 30 minutes. If I want to be at a meeting that starts at 2:00, what’s my chance for success if I plan to leave home at 1:30? Non-existent. Even without traffic problems, I’ve left out any possibility of last-minute delays in leaving the house, and I’ve left out the time needed to find parking, walk to the office building, reach the reception desk, and get to the meeting space.

Another underestimating effect I’ve observed is that people feel pressured into promising more than they can deliver. Even though “fast” and “good” are often not the same thing, people often feel compelled to give short estimates to show they can handle a task quickly. It’s almost as if they’re admitting to a shortcoming if they don’t make their estimate as short as possible, so they go with the shortest possible estimate instead of something more realistic.


The other factor, which hardly anybody considers, is simple probability.

Many people figure that to estimate the overall project duration, you just add up the tasks that make it up (along the critical path), and there’s your estimate. Turns out, the probabilities are against you.

Here’s a thought experiment. First, what would you say your on-time percentage is? That is, what percentage of your tasks are completed within the time you estimated? I’m talking about the task level, not the overall project completion estimate – the pieces you’d estimate individually before you come up with the grand timeline.

I’d say that most people are no higher than 50% at finishing tasks within their estimates. Unfortunately, if your ability to estimate task length is weak, then your ability to estimate your on-time percentage is probably weak too. Darn.

But anyway, suppose you credit yourself with a 90% on-time rate. Nine times out of ten, you complete a task within the time you thought it would take.

For the second step of this thought experiment: How many tasks would you typically string together sequentially as part of an overall effort? This might be the number of tasks on the critical path of a project plan, or it might be your sequence of activities trying to get out the door before a vacation. Suppose you pick eight steps for this exercise.

So now you’ve got eight steps, and you’re 90% likely to be on time for each of those steps. Maybe you estimated two hours each for the first four, and four hours each for the last four. That’s a total of 24 hours. What’s the probability that the whole effort will actually come in within those 24 hours?

In other words, what’s the probability of hitting 90% eight times in a row? You can calculate that by pasting this formula into a Google search box: 0.9^8. The answer is 43%. That’s worse than 50/50.

In other words, even if you’re pretty good at estimating each of the eight tasks, this project is likely to be late.

Adding up the task estimates to find the total estimate is a bad bet.
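
Here’s the same thought experiment as a quick Python sketch, so you can plug in your own on-time rate and task count (it assumes the tasks are independent and share one per-task rate):

  # Probability that every task in a sequence finishes within its estimate.
  def on_time_probability(per_task_rate: float, num_tasks: int) -> float:
      return per_task_rate ** num_tasks

  print(f"{on_time_probability(0.9, 8):.0%}")   # 43% -- the eight-task example above
  print(f"{on_time_probability(0.5, 8):.1%}")   # 0.4% if you're only 50% on time per task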

What’s the Fix?

There’s an old tongue-in-cheek rule of thumb that says you should double the number and increase the units (24 hours becoming 48 days, for example). But that’s not very realistic or helpful.

PERT Three-Point Estimate

A commonly offered fix is the PERT three-point estimation technique: (optimistic + 4*likely + pessimistic) / 6. You’re making three estimates: the optimistic estimate for when nothing goes wrong, the pessimistic estimate for when a lot goes wrong, and the estimate you consider most likely.

As an example, say you’ve originally estimated a task at 2 hours, assuming everything goes pretty smoothly. That’s now your optimistic guess. Maybe your pessimistic guess is that you could spend an extra hour and a half (3.5 hours total) if you find a surprise need for a trip to the store before you’re ready to go. Maybe your likely estimate is that you fudge in an extra half hour on top of your optimistic estimate for things like looking for lost keys and herding the cats (your passengers). The PERT estimate is therefore (2 + 4*2.5 + 3.5)/6, or about 2.6 hours.
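
In code, the PERT estimate is just a weighted average. A minimal Python sketch, using the driving example above:

  # PERT three-point estimate: weighted average of optimistic, likely, and pessimistic.
  def pert_estimate(optimistic: float, likely: float, pessimistic: float) -> float:
      return (optimistic + 4 * likely + pessimistic) / 6

  print(round(pert_estimate(2.0, 2.5, 3.5), 1))   # 2.6 hours for the example above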

At its best, the PERT estimate is an intelligent fudge factor to keep you from always using optimistic estimates. It gets you to think about what could go wrong, or what could delay you.

At its worst, the PERT estimate won’t be very good either. If you’re no good at making one estimate for a task, you might be no good at making three estimates for the task. The optimistic estimate might be hard enough to come up with, but it’s harder still to estimate what’s likely, and it’s hard to know how pessimistic you should be in your pessimistic estimate.

There’s still an element of probability in the PERT method. The task could still take longer than the PERT estimate suggested. The best we can hope for is that the PERT estimate is more realistic than our originally optimistic estimate. It’ll never be a guarantee.

Task Data

Another possible fix is to track your actual time, so you can use historical data in your future estimates. This could get you past the memory bias that makes you underestimate how long past tasks took.

That’s great if you can collect enough accurate data, and if your future efforts are comparable to your past efforts.

What if entering all that data is too tedious to do in a consistent and timely manner? Someone who enters their task data long after the fact might not be very accurate.

What if it’s hard to say how much time you really spent on a given task? If you operate in an environment that features lots of interruptions and distractions, knowing that you started a task at 9am one day and finished at 5pm the next day leaves a lot of leeway in estimating how much time you spent on one particular effort.

In other words, data’s great if you’ve got it, but it might not be complete, accurate, or relevant for the task you’re estimating now.

The Probability Fudge

If you’ve got a decent idea of your on-time rate, you could fudge your estimates up by dividing by your on-time percentage.

If you’re on time 80% of the time, make your estimate, then divide it by 0.8. Your estimate of 24 hours becomes 24/0.8 = 30 hours. If you’re on time 50% of the time, divide your estimate by 0.5. For those who don’t breathe math, that’s the same as doubling it. If your on-time rate is 50%, your estimate of 24 hours becomes 48 hours.

This too is a fudge factor, but it might help steer you to more realistic estimates.

My Best Advice: Set Expectations, and Update Them

Given that no method is perfect at fixing squishy time estimates, do you give up in despair? Do you let the delays happen, then whine about how it’s not your fault? Naaaah.

First, set realistic expectations for yourself. Recognize that your estimates won’t be perfect. Some tasks will go past their estimated durations.

Set realistic expectations for whoever will see the estimates. Talk about probabilities with them, even if you don’t have enough data to gauge those probabilities precisely.

Importantly, update the expectations. “See it, say it” as they say in the DC subway system. If you see things heading toward delay, you’re generally better off by being up-front about it with whoever needs to know instead of waiting until the deadline to announce you’re going to be late.

Remember those eight tasks from our hypothetical example? At first, you had to get your 90% on-time rate eight times in a row. Finish one off, and now you need to hit 90% only seven times in a row. Then six, then five, and so on. Your accuracy in the overall project deadline should go up as you finish off the tasks that make it up.

Happy estimating!


Contacts Help: Lessons Learned from a Job Search

I’ll be starting my new job soon at the Space Telescope Science Institute. During my job search, I kept data and took notes, so here are my main lessons learned.

Contacts Help

Contacts improved my chances for an interview by a factor of about 10. I applied for lots of positions. At companies where I had a contact, or where a contact could introduce me to someone, almost half of my applications turned into interviews. At companies where I couldn’t come up with a contact, less than 1 in 20 turned into interviews. That’s about a 10-to-1 difference: 10 times more likely to get an interview if I have a contact than if I don’t.

Contacts were no guarantee that I’d get an interview, but they improved my chances by a lot.

It was still worthwhile to apply to places where I didn’t have a contact, because some of those turned into interviews anyway. My odds were lower, but one success would have been enough.

It’s easy to see why contacts would be so helpful in getting an interview. When every job opening gets hundreds of applications, the thing that’s going to differentiate you is the person who tells the hiring manager, “You ought to talk to this person.” That is what makes your application stand out from the rest.

Yep, LinkedIn

LinkedIn was my tool of choice for finding contacts at prospective employers. For every job of interest, I looked up the company on LinkedIn to check for first-degree connections (my own connections) and second-degree connections (connections of my connections). For second-degree connections, I’d ask my intermediate connections for an intro.

I paid for the Job Seeker Basic level, mostly because it let me flag my LinkedIn profile as that of a job seeker. Maybe it helped colleagues see that I was looking for employment, but I have no evidence that it helped (and none that it hurt, either). I’ve returned to the free membership now that I’ve found a job.

Use All Your Search Resources

Most of the jobs of interest came from automated job searches and job sites. It was great when my contacts tipped me off about a job, but there were only so many of those. (A colleague tipped me off about the job I got, but there were lots of potentially interesting jobs out there that nobody had tipped me off about.) For me, LinkedIn and TheLadders were the main automated searches that turned up jobs of interest. Together, they accounted for almost half of the jobs I applied for.

TheLadders costs money, so one might wonder whether it was worthwhile. For me, it helped find job openings, but that was about it. Arguably, TheLadders is supposed to bring you to the attention of recruiters who know you’re serious, because after all you’re paying for the service. That didn’t seem to happen for me, and you’re more likely anyway to get a job through your network than through recruiters. Joining TheLadders might have been a bad bet, but my view was to line up a good range of job search sources.

Jobs from other online sources were, individually, smaller contributors, but in aggregate they still accounted for another third of my job applications. These other sites included Dice, CareerBuilder, SimplyHired, NonProfitTimes, and the job sites of various trade periodicals. In each case, I set up an automated search to send me jobs.

The rest of my jobs of interest, roughly one-fifth, came from contacts who tipped me off about job openings, at their own company or elsewhere. This was the smallest group, but it’s the one that paid off in the end.

It’s Gonna Take a While

Expect your job search to take a while. (Welcome to 21st-century America.)

The Urban Institute says that as of June 2013, “long-term unemployment remains at record high levels,” with more than a third of unemployed workers out of work for six months or longer.

The New York Times reported that “The average unemployed 55- to 64-year-old who got a job last month [June 2013] had been out of work for more than 11 months, versus 6 months for the average 20- to 24-year-old.” The article quotes an economist from the Bureau of Labor Statistics who says “the older you are, the longer it takes” to find a job.

That old advice to save up to six months’ worth of income in case you lose your job seems insufficient now. Fortunately, my wife remained employed and we had savings.

CRM Lite

I couldn’t have done this search without keeping track of where I applied, what my status was, and where I needed to do follow-up. Essentially, this was a job for a personal CRM tool (Customer Relationship Management).

There are tools out there, like Salesforce, that do this on a corporate level. I even tried the minimum participation level at Salesforce. It worked, but I suspect it’s still overkill for someone who’s just doing CRM for one.

On my iPhone, I used Contacts Journal. It helped me track my interactions with various contacts, and keep to-do lists for follow-ups.

I also kept a spreadsheet listing every job I applied for. I included the job title, the company, URLs for the company or job listing, contact info, and a status description.

All of these tools required that I keep my data up to date. For me, the effort was worthwhile, because I couldn’t have memorized all that.

Good Luck!

If you’re out there searching, I wish you good luck! (Oh, alright, I’ll wish you good luck even if you’re not currently searching.)


LinkedIn Endorsements and Anglo-Saxon Compurgators

Confessions of a history buff…

The usual complaints I hear about LinkedIn endorsements are that people endorse you for things they haven’t observed themselves, or they endorse you for skills suggested by LinkedIn, when you’ve made no claim to those skills.

For my part, I’m grateful for the endorsements on my LinkedIn profile.

The endorsements feature reminds me of the Anglo-Saxon idea of compurgators. Certain civil and criminal matters were resolved by setting the number of “compurgators” you’d need to produce. These were people who’d vouch for your side of the story. They might just vouch for whether they believed you, even if they didn’t witness the events in question. If you could line up enough people to stand by your side, your story became more persuasive. If you couldn’t get enough people to step forward on your behalf, suspicions were aroused about your honesty and innocence.

Was the compurgation system open to abuse? Certainly. A guilty person who was popular or persuasive might still fetch enough compurgators. An innocent but unpopular person might have trouble finding anyone to help out. Yup, it was imperfect (unlike our modern justice system, which is 100% perfect, right?). But all in all, the compurgation system was a crowdsourced system of justice, in which you achieved validity by getting the crowd’s support.

And that’s pretty much what the LinkedIn endorsements are. Maybe your contacts are vouching for your skills, or maybe they’re vouching for you. People speaking up for each other and trying to help each other are good things, despite an imperfect system.

But now I’ve gone and revealed myself as someone who reads history for fun.


The Improvement Paradox

One of the “tells” I look for among colleagues in the workplace is a person’s eagerness to improve: improving their skills and knowledge, improving the products and services they deliver, and improving the processes they follow. The paradox is that the best workers have the most focus on improvement, but the ones who need it the most usually have the least interest in improving.

Why would the ones who need it most want it least, and the ones who need it least want it most? In both cases, it’s how they got that way. The best became the best by working at it, by frequently looking for ways to improve. The worst got complacent, vain, or passive at some point, so they’ve seen no need to work on improving.

The Time Element

The time element makes the difference. The improvers have an eye on the future. They know that no matter what they’re offering now, it’ll become stale over time. Subject matter expertise is great, but only if you keep up with the subject matter. Product expertise is a great thing, but someday that product will become irrelevant. Your current services and processes might be great, but they won’t stay great unless they keep up with evolving circumstances.

The non-improvers don’t have an eye on the future. They have an eye on some moment in the past. Some years ago, they learned one use case or procedure, or one programming language, operating system, or application, and got stuck. Call them “point-in-time Luddites” – fixated on some past state, and unwilling to go beyond it.


This improvement tell shows up when you look at lifers – people who’ve been with the company for many years, and who want to stay for many more. The difference between the best lifers and the worst is their focus on improvement, or lack thereof. The best lifers keep updating their skills, adapting their practices, and staying in touch with what’s going on in the organization and their field. They’re always on the lookout for a better way. Their understanding of the organization is wide and deep, and therefore highly valuable. The worst lifers latched onto a technology or practice years ago, and they’ve resisted doing anything different ever since. Their understanding of the organization and their field becomes increasingly narrow and shallow as they resist anything that smacks of change.


The improvement tell shows up when opportunities for promotions arise. I’ve always viewed promotions more as rewards than incentives – rewards for those who’ve extended themselves (improvers), not incentives for those who haven’t (non-improvers). I’ve known employees who refused to work toward any sort of improvement unless they got a promotion first. They’re the non-improvers. I suppose you could say they’ve done me a small favor: when someone refuses to go above and beyond without a promotion, my list of candidates for promotion gets shorter by one.

Lessons Learned

The improvement tell shows up in lessons-learned exercises. My standard lessons-learned agenda goes like this:

  1. Review of the facts: Let’s make sure everyone understands the facts of the situation.
  2. What went well: What should we do the same way the next time? What worked? What’s most important to keep just the way it is? Paradoxically perhaps, the improvers are best at noting what doesn’t need improving. Effective improvement means being selective, recognizing where it’s not needed. Otherwise, a focus on improvement becomes pathological when you think everything needs fixing equally.

    Non-improvers tend to start with an assumption that whatever they were doing, that’s what went well. They’re less open to the idea that they’d need to change anything, so their assessments of what’s truly worth preserving will be distorted.

    By the way, in my lessons-learned exercises, I make a point of putting “What went well” before “What needs improvement” because it’s all too easy for a lessons-learned exercise to devolve into griping and finger-pointing. Starting with the positives generally puts people in a less complaining mood, and it softens the blow if there are some difficult improvements to discuss.

  3. What needs improvement: What should we do differently next time? What didn’t work as well as it should have? What’s the biggest improvement we can make for next time? Improvers are the better contributors here too, for the obvious reason that they care about making improvements, but also because they’re more collaborative about change. If the non-improvers see any room for improvement, it’ll be in someone else’s work. That changes the dynamic from “what can we do” to “I’m fine, but let’s talk about what’s wrong with you” – blame games instead of constructive improvement.

Improvers at Any Level

The improvement tell shows up at all skill levels and all staff levels. The seasoned pro who got to a certain level and then stopped has turned into a non-improver. The junior staff member who has little in the way of skills and knowledge now, but who brings a fresh perspective and who wants to develop skills and knowledge, is a valuable improver. Maybe today, the seasoned pro still fills an important role, and the junior staff person isn’t a major contributor. Over time, however, the junior improver will become more valuable while the senior non-improver will become less valuable.

Certainly, it can go the other way too – the seasoned pro who never goes stale, or the junior staff person who rarely makes an effort to do better. That’s the point: Any skill level can be improvement-focused or not.

I Like the Improvers

For these reasons and others, the colleagues I value most are the ones who have a healthy focus on making improvements.

Is This Project Worth Doing? The Fit Matrix

I have a tool I use as a quick first check on any effort that someone is proposing. I call it the Fit Matrix.

Someone wants to see a new service, or a new piece of software. Maybe someone wants to bring in a vendor for an overview of a product line. Or the question might be whether we should keep doing something we’re already doing.

I’m writing this from an IT management perspective, but the tool is easily generalized for other areas.

The Fit Matrix is a two-by-two matrix:

                   Business Fit: Low                 Business Fit: High
  IT Fit: High     Solution Looking for a Problem    Do It!
  IT Fit: Low      Don’t Do It!                      Problem Looking for a Solution

It’s intended for a quick assessment. It’s not a substitute for a feasibility study, a cost analysis, a risk analysis, or strategic planning. You’d use this as a quick look before you get to those more involved assessments.
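
If it helps to see the matrix as data, here’s a trivial Python sketch of the lookup (the names are mine, not part of any formal tool):

  # The Fit Matrix as a simple lookup: (business_fit, it_fit) -> outcome.
  FIT_MATRIX = {
      ("high", "high"): "Do It!",
      ("high", "low"): "Problem Looking for a Solution",
      ("low", "high"): "Solution Looking for a Problem",
      ("low", "low"): "Don't Do It!",
  }

  def assess(business_fit: str, it_fit: str) -> str:
      """Quick first-pass verdict; not a substitute for deeper analysis."""
      return FIT_MATRIX[(business_fit.lower(), it_fit.lower())]

  print(assess("High", "Low"))   # Problem Looking for a Solution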

Business Fit: Does it serve the organization’s directions?

Business Fit looks outside the IT group to match up the work with what the organization as a whole is up to. You’ll rate Business Fit as High or Low. Neither assessment guarantees that the work will or won’t be pursued. You’re just stacking it up against the organization’s known directions.

The proposed work has High Business Fit if:

  • It directly supports a documented organizational goal.
  • It directly supports an important business process within the organization.
  • There’s a target audience that is likely to want something along these lines.

Otherwise, the proposed work has Low Business Fit. It doesn’t support documented organizational goals or important processes, or we don’t really have an audience for it.

What about a gray middle? Maybe there’s a request people often make, but nothing in the organization’s plans calls for it. I’d usually call that a Low Business Fit. For Business Fit, we’re not (yet) asking whether the idea would be popular. We’re asking whether we can match it up with known organizational directions.

IT Fit: Does it fit with IT’s directions?

This is the internal view – how well the work fits in with the current or planned IT environment. Just like Business Fit, IT Fit will be judged as High or Low, and neither assessment guarantees what we’ll do with it.

The proposed work has High IT Fit if:

  • We have the technology to do it, or we’re already planning to acquire the technology.
  • We have the know-how, or we’re planning to get it. For this purpose, I’m not making a distinction between insourced work, outsourced work, or other approaches. The question is whether we do or don’t have a reasonable expectation that the necessary expertise will be in place, wherever and however we might come by it.
  • We expect it will integrate well with the existing or planned IT environment.

Otherwise, the work has Low IT Fit. We don’t have the technology, we can’t get the expertise, or we don’t see how it’ll work well with our intended environment. We’re not ready, not willing, or not able to do it.

The Four Outcomes, and What to Do With Them

Do It! (High Business Fit, High IT Fit): This is the ideal outcome. The organization needs it, and you can deliver it. You’ve got a case for taking a closer look. You’d still need to do appropriate levels of investigation, planning, and prioritization, but for now, you can say it’s a good candidate for such consideration.

Problem Looking for a Solution (High Business Fit, Low IT Fit): This is important, because the organization has a need, but somehow the idea in question is not a good fit with what you’re already doing. Either you need a better idea that will fit your IT directions, or you need to modify your IT directions to accommodate the solution. Or you don’t need a new solution because the current solutions are fine (for now).

Solution Looking for a Problem (Low Business Fit, High IT Fit): The idea seems cool, from a technology perspective, but you can’t tie it to organizational priorities. A Solution Looking for a Problem is less important than a Problem Looking for a Solution. Either you reject the idea altogether, or you allow it as an experimental side project with the hope that it may yet prove useful, at least as a learning exercise. But you don’t throw lots of resources at it.

Don’t Do It! (Low Business Fit, Low IT Fit): This idea isn’t a good fit for the organization or for IT. It’s not a good candidate for further pursuit, even as an experimental project. This situation will come up, and it’s not a personal failure for the person who suggested it. A good-faith suggestion that turns out not to be a good fit is still a Good Thing. You now have the opportunity to increase awareness of what is or isn’t a good fit, and you’ve handled the suggestion before much time was spent on it. If you want to send the message that you welcome suggestions, treat the person well even though the idea was rejected this time. Next time, they’ll be a little smarter about what’s going to fly.

What if you don’t have documented directions for the organization, or for IT?

You might find yourself in an organization that has no clear, documented directions. You don’t know what to match against. Or you might have inherited an IT group that hasn’t spelled out its own directions, or the directions have little to do with what the organization as a whole wants.

Make your best guesses, and use the Fit Matrix anyway. Give people something to react to. Maybe you’ll shake something out of the tree if you describe why you think something is or isn’t a good fit. You have a chance to learn something, and get greater clarity about the needed directions.

Foster Awareness

The Fit Matrix gives you your elevator pitch for any work you’ve assessed – why you are, or aren’t, pursuing a particular area of effort. That quick assessment also helps you foster awareness of your activities and directions. Give it a try!

Time to Back Away From Telecommuting? Nope

Yahoo’s CEO Marissa Mayer announced the end of telecommuting at Yahoo. While some decry this as a step backward, the other side of the story is that there was widespread abuse of telecommuting and a lack of accountability. The move might also be a de facto layoff if some people quit rather than return to working on premises.

But is Yahoo’s action a warning that telecommuting isn’t everything it’s cracked up to be? Nope.

The problem I have with all the arguing over whether telecommuting is worthwhile, or whether Yahoo made the right decision, is this: Your Mileage May Vary.

People keep talking as if telecommuting were one thing that works one way, with a consistent, specific set of benefits and disadvantages for everyone, everywhere, all the time.

Are you more productive in the office or at home? Not everyone has the same answer, and often it’ll depend on the task. Your workplace has resources and distractions. Your home has resources and distractions. There’s no universal answer to say one is always better than the other, for every person, for every task. A report from the Bureau of Labor Statistics (“The hard truth about telecommuting”) says telecommuting “seems to boost productivity, decrease absenteeism, and increase retention.” That’s good news, but it’s a trend, not a universal truth. The BLS report also notes that telecommuters tend to work longer hours, and that telecommuting often falls short on offering a better work-life balance. Here too, a trend is a trend, not a universal rule. Your mileage may vary.

Does a company save money on office space when people are telecommuting? Only if the company removes or reassigns your office space when you switch to telecommuting, and only if the cost savings are greater than any cost increases associated with extensive telecommuting. Does Yahoo have plenty of empty office space and unused office resources sitting around, ready for the returning workers? If so, Yahoo has been wasting money maintaining an environment people weren’t using: heating and cooling, electricity, cleaning services, network connectivity, office supplies, and so on. If not, Yahoo is facing a sizable cost of getting the workplace ready for a big influx of workers. Your mileage may vary.

Ms. Mayer mentioned one area that really does differ between telecommuters and office workers: face time, or the lack thereof. There’s a lot of value and opportunity in the ad hoc communications that can occur when you’re with your colleagues. Communications benefit when you see facial expressions and body language. You lose out on all that when you’re working alone, physically isolated from your colleagues. One telecommuter’s lament (“17 Telecommuting Disadvantages”) is mostly about the lack of face time. Some research suggests that a lack of face time can affect your evaluations (“Why Showing Your Face at Work Matters”).

How do you handle the lack of face time for telecommuters? There are several ways to offset it:

  • In-office days: Arrange for periodic in-office days. Maybe one employee splits up each week by working three days in the office, two days at home; the employee gets some face time, and some isolated time. Maybe the employee comes in once a quarter, and you take full advantage of the opportunity with events or activities that would most benefit from having the person on site.
  • Video conferencing: Some meetings or conversations could work better if you can see the remote people on a screen.
  • Educating staff on audio conferencing: Mostly, problems on audio conferences are the result of people not being used to it. Tips and reminders, or just plain frequent usage, can help.
  • Make online conferencing the norm: Skip the meeting table with a speakerphone in the middle. Have everyone use online meeting tools, whether or not telecommuters are involved, so that your location is immaterial.
  • Acceptance: The offsets above can help reduce the problems of losing face time, but they won’t eliminate them. Another “offset,” therefore, is simply to accept that the reduction in face time is a cost of doing business. If the benefits of telecommuting outweigh the hassles, take a breath and accept it. There are potential disadvantages for those who show up on site, too, but we accept those as a normal cost of doing business.

It Depends: On the Person, the Place, and the Thing

The way to look at telecommuting is that it’s not a universal good or a universal evil. Handle it case by case.

It depends on the person. Is this employee reliable and trustworthy? experienced and resourceful? fully onboarded and acculturated? An employee who gets the organization’s culture and who can work unsupervised is a good candidate for telecommuting. An employee who’s still learning the job, or whose reliability is in question, might need more in-person attention.

It depends on the place. Does this employee have a home environment that’s suitable for telecommuting, including the necessary connectivity and equipment, and a reasonably distraction-free work space? I’d want to make sure telecommuters understand what’s expected.

It depends on the thing. Will the employee be performing “black box” tasks, for which all you care about are the outputs? Does the employee consistently have enough of a workload of such tasks?

Culturally, you might have a challenge convincing the staff that telecommuting isn’t for everyone. You might have a challenge if telecommuting appears to favor some groups over others.

In the end, not everyone gets to telecommute, and not every telecommuter is a 100% telecommuter. I’d rather handle abuses case by case instead of letting a few bad citizens ruin things for the good citizens, but if the abuse has become widespread enough among your telecommuters, it might indeed be time to pull the plug – and time to find out how the abuse got so bad before anyone took useful action.