Value of Information for Product Practitioners

Rory Lynch
6 min read · Aug 16, 2021

There are two important considerations when making a decision — your confidence that you're making an acceptable choice, and your tolerance for being wrong. As much as we'd like to have perfect knowledge about both the inputs to a system and the system itself, we can never be 100% sure that any decision we're making is right. That means, consciously or not, we're asking the question "am I confident enough in this choice to move ahead?" Here we're going to explore what happens when you reach that point and you're not sure if you're confident enough yet, and what to do when you get there.

Let’s take two extreme examples to show the range of what we’re looking at here.

Example 1 — Low confidence, but very high tolerance for wrongness

Person A, let's call her Anna, goes to a new restaurant. There are only two items on the menu, and she doesn't know which one is better. Fortunately, she suspects that no matter which one she gets, it'll be acceptable, so she picks randomly.

Example 2 — Low confidence, and also zero tolerance for wrongness

Person B, let's call them Blake, goes to a new restaurant. There are only two items on the menu, and they don't know which one is better. Because they have no tolerance for wrongness (in this silly example), Blake buys both, tries both, and then only eats the one that's better.

That’s an extreme example, but it’s illustrative so let’s unpack it.

The food critic from Ratatouille, saying "I don't like food, I love it."
Blake is like this guy — If I don’t love food, I don’t swallow.

The only way to be 100% sure about something in a complex domain (hint: software is a complex domain) is to have already done it. That's the entire point of the Agile Manifesto. In the example above, the only way Blake could be 100% sure one meal was better than the other was to eat both (why this is true, and what to do about it, is out of scope for this blog post.) The major difference between these two examples was how much weight each person put on being right — and how much money they were willing to spend to reduce their uncertainty.

We're going to call that amount, the maximum amount a rational person would spend to reduce their uncertainty, the value of information (VoI).

There are a few things we can immediately intuit about the VoI.

  1. The closer you want to push your certainty to 100%, the more expensive it becomes (the ultimate example being Blake above, who was willing to do ALL THE THINGS to completion to find out for sure)
  2. The VoI will never be more than the incremental benefit of being right
  3. If the decision has already been made, the VoI is 0
  4. The VoI can never be less than zero

Now this is a software blog (mostly), so let's get one thing out of the way here — running software teams is expensive. If you're thinking about your software architecture and trying to decide which of 3 options is going to be best, you almost certainly can't afford to do a Blake and build all 3 to completion just to find out; at some point you're going to have to get your confidence to "good enough", and then try it and find out.

Let's do some quick maths to illustrate. My team consists of myself and six engineers (7 people total). We work 40 hours per week, and I'm going to say our burn rate is $100/hour (which is a fairly conservative estimate.) A week of our time is therefore worth $28,000. If it takes 2 weeks to do a proof of concept to trial each architecture, and there are 3 to try, that would be $168,000 to find out which option is best. That's our cost to learn all the information.
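The arithmetic is simple enough to sanity-check in a couple of lines (the numbers here are the illustration above, not real figures):

```python
# Sanity check of the burn-rate maths. All figures are illustrative.
team_size = 7          # me plus six engineers
hours_per_week = 40
rate_per_hour = 100    # dollars/hour, a conservative blended rate

weekly_cost = team_size * hours_per_week * rate_per_hour

poc_weeks = 2          # time to build one proof of concept
options = 3            # candidate architectures to trial

cost_of_full_information = weekly_cost * poc_weeks * options

print(f"Weekly burn: ${weekly_cost:,}")
print(f"Cost to trial all options: ${cost_of_full_information:,}")
```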

Is that reasonable?

It depends on how much we already know, and how confident we have to be in order to go ahead. Let’s get into it.

Strictly speaking, the Value of Information is not additive, and must be recalculated after every new piece of information we learn. The classic way to calculate VoI is with a Monte Carlo simulation, which you may already be familiar with. The steps look like this.

  1. Write down a utility function for each option you're choosing between, in terms of the variables you're uncertain about
  2. Estimate a range and distribution for each variable¹
  3. Sample each variable
  4. Evaluate each option's utility for this sample, and store the results
  5. Repeat steps 3 and 4 N times, where N is a big number (e.g. N = 40,000)
  6. For each option, average its utility across all the samples, then take the best of those averages. This is our expected outcome with current information (we have to commit to one option up front)
  7. For each sample, take the utility of whichever option is best for that sample, then average those. This is our expected outcome with perfect information (we get to pick the right option every time)
  8. Take the difference between (7) and (6). This is the value of perfect information
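Those steps can be sketched in a few lines of Python. The two utility functions here are entirely hypothetical (a made-up choice between two architectures whose value depends on an uncertain request volume) — the point is the shape of the calculation, not the numbers:

```python
import random

# Hypothetical decision: two architectures whose utility depends on one
# uncertain variable, monthly request volume (in millions).
def utility_a(volume):
    # Option A: cheap to run, but degrades badly past ~5M requests/month.
    return 100_000 - 8_000 * max(0, volume - 5) ** 2

def utility_b(volume):
    # Option B: higher fixed cost, but scales smoothly.
    return 70_000 + 2_000 * volume

random.seed(42)
N = 40_000
# Step 2-3: our estimated range, sampled with a uniform distribution.
samples = [random.uniform(1, 10) for _ in range(N)]

# Step 6: with current information we must commit to one option up front,
# so our expected outcome is the better of the two averages.
ev_current = max(
    sum(utility_a(v) for v in samples) / N,
    sum(utility_b(v) for v in samples) / N,
)

# Step 7: with perfect information we'd know the volume in advance and
# pick the better option in each sampled world.
ev_perfect = sum(max(utility_a(v), utility_b(v)) for v in samples) / N

# Step 8: the difference is the value of perfect information.
evpi = ev_perfect - ev_current
print(f"Expected value of perfect information: ${evpi:,.0f}")
```

Note that `ev_perfect` can never be smaller than `ev_current` — which is exactly intuition (4) above: the VoI can never be less than zero.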

That's a lot of steps, and it's quite complicated. There is specialist software that can help with this kind of analysis, but in many cases you might be able to compare the relative Value of Information of two problems by eyeballing them.

There's one final point to consider here — the VoI is the maximum a rational person would be willing to spend to reduce uncertainty. In practice it makes no sense to spend that much, because spending the full amount to obtain perfect information eradicates the benefit of having it (to a perfectly risk-neutral observer.) What we should probably do is decide on an acceptable percentage of that amount to spend. I can't help you decide what that percentage should be, but a number of books and articles on the topic suggest 10% as a good rule of thumb (by way of example, that means if we expect $4 million in benefit from having perfect information, we could reasonably spend up to $400,000 to learn what we can.)

How do we approach an acceptable uncertainty without breaking the bank?

I am going to say something which is controversial (but shouldn't be.) You don't need perfect information to make a decision. Further, you don't need to measure something perfectly in order for that measurement to be useful. All you have to do is reduce your confidence interval to an acceptable range. There are a lot of ways to handle this, depending on the exact context, but I'll talk about one here that might be useful.

You're probably familiar with the Fermi Estimate; this method uses a similar concept.

  1. Break down the uncertain problem into individual pieces
  2. Evaluate your confidence in each of the individual pieces
  3. Identify the one (maaaaaaaybe 2, but stay focussed) biggest bottleneck to high confidence
  4. Find a way to improve your confidence in that piece
  5. Repeat steps 2 to 4 until you reach an acceptable overall confidence, or until the Value of Information drops below the Cost of Information

By way of the architecture example above — you might break down the differences in the options and discover there’s one particular requirement you’re not sure you can meet (confidence < 50%), but that the remaining requirements are much easier (> 90%.) Now instead of needing to solve the whole problem at once you can simply find a way to prove that one point. If that brings your overall confidence up sufficiently, you can stop and move ahead. If not, repeat the process with whatever the next biggest question mark is.
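That loop can be sketched in code too. This is a deliberately crude model — it assumes overall confidence is the product of independent per-piece confidences, and the pieces and numbers are hypothetical — but it shows the shape of the process:

```python
# Hypothetical confidence breakdown for one architecture option.
# Values are subjective probabilities that each requirement can be met.
pieces = {
    "meets latency requirement": 0.45,   # the big question mark
    "fits our deployment model": 0.92,
    "team can learn the stack":  0.95,
    "stays within budget":       0.90,
}

def overall_confidence(confidences):
    # Crude assumption: pieces are independent, so confidences multiply.
    result = 1.0
    for c in confidences.values():
        result *= c
    return result

TARGET = 0.70  # our "good enough" threshold

while overall_confidence(pieces) < TARGET:
    # Step 3: the lowest-confidence piece is the bottleneck.
    bottleneck = min(pieces, key=pieces.get)
    print(f"investigate next: {bottleneck} ({pieces[bottleneck]:.0%})")
    # Step 4: in real life this is a spike or proof of concept; here we
    # just pretend the investigation raised our confidence on that piece.
    pieces[bottleneck] = min(0.95, pieces[bottleneck] + 0.30)

print(f"overall confidence: {overall_confidence(pieces):.0%} — good to go")
```

In practice step 4 is the expensive part — a spike, a prototype, a load test — which is why you only do it for the current bottleneck, and stop as soon as the overall confidence clears your threshold.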

Bringing It Home

That’s how I think about the Value of Information, particularly as it relates to software engineering. Information and Decision Theory is a huge domain, and I’ll keep posting and thinking about it as it relates to software as I learn.

If you enjoyed this, please Follow, Clap, or give it a share. Knowing people find value in what I’m writing helps me know I should keep going!

Notes

  1. It's possible to get really into the weeds with how you do this — the simplest case is using a uniform distribution across the range you've given, but you can use all kinds of distributions, depending on exactly what you know about that variable. One important thing I'll point out here is that you don't have to get this exactly right. Remember that even simple linear regression models significantly outperform human intuition in almost every case.

Rory Lynch

Product person and part-time powerlifter. Agilist. Occasional writer.