AI Design Principles: Choosing the Right Problem – Part 1

Part 1: Begin with a Simple, Easy Decision Problem

If there’s a big mistake people make when designing machine learning systems, it is deciding to tackle the wrong problem. Pick the wrong problem, and you can easily spend months bashing your head against one that would require a research team years to solve properly. Most companies don’t have that magnitude of resources to devote to solving an individual problem, and it isn’t cost effective anyway.

How do you decide what problem to solve? Begin with a simple, easy decision problem.

What is a “simple, easy decision problem”?

A simple, easy decision problem is one that

  • involves automating making a decision
  • has a well-defined set of mutually-exclusive possible decisions
  • has explainable decisions that reasonable people would agree on
  • is so simple that it cannot be decomposed into smaller problems worth solving
  • has a fast, easy solution

Why?

Choose a decision problem to automate. Decision problems automate a thought process a human would otherwise perform. If what you really want is to understand your data, then what you need at this stage is data exploration and analysis (possibly using machine learning), not automation. In general you wouldn’t automate a process like k-means clustering for its own sake – you’d run it with a specific purpose in mind. On the other hand, you would automate a decision like “Should I switch the light in the north-south direction at this intersection to red?”

Choose a decision problem with a well-defined set of possible decisions. If the set of possible decisions the machine might make is unbounded, or it isn’t clear what a decision means, you’ll have problems. If there are infinitely many possible decisions (or simply so many that a human couldn’t reasonably consider them all), even the simplest algorithm capable of producing every possible answer is very complex. You also lose the ability to train on each possible response, because gathering data on every one may be impractical.

On the other hand, deciding “Which one of these ten genres does this book fall into, based on its title and text?” is well-defined. As a person making the decision, you list out the possible genres and pick the one that fits the book best. The process for the machine is analogous – it may calculate a score for each genre and pick the top one.
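
As a rough illustration of that pick-one-of-N shape, here is a minimal sketch; the genres, keyword lists, and scoring rule are toy stand-ins for whatever a trained model would actually learn:

    # Score each genre for a book, then pick exactly one – the top scorer.
    # The genres and keyword lists below are hypothetical placeholders.
    GENRES = ["mystery", "romance", "sci-fi"]   # well-defined, finite set

    KEYWORDS = {
        "mystery": {"detective", "murder", "clue"},
        "romance": {"love", "heart", "wedding"},
        "sci-fi":  {"starship", "robot", "galaxy"},
    }

    def classify(title: str, text: str) -> str:
        words = set((title + " " + text).lower().split())
        scores = {g: len(words & KEYWORDS[g]) for g in GENRES}
        return max(scores, key=scores.get)   # one mutually-exclusive decision

    print(classify("The Last Starship", "A robot pilots a starship across the galaxy"))
    # -> "sci-fi"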

Choose a decision problem with a set of mutually-exclusive decisions. If there are multiple correct decisions for the same set of inputs, you lose the ability to train the model toward a single right answer. If any combination of answers is permissible and you have more than, say, 20 of them, you really have an answer space with over a million possible choices. It also complicates training and evaluating the model. Say you’re training a chat bot to answer natural language questions users pose. If the chat bot gives three of five essential pieces of information when responding to a particular question, “how correct” was it, and how should the interaction be counted in training or evaluation?
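
The arithmetic behind that claim, as a quick sketch:

    # With 20 mutually-exclusive answers there are only 20 possible decisions,
    # but if any combination of those answers is allowed, the answer space is
    # every subset of them.
    n_answers = 20
    print(n_answers)       # 20 decisions when exactly one answer is chosen
    print(2 ** n_answers)  # 1048576 combinations when any subset is allowed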

Choose a decision problem where reasonable people would make the same decisions. Suppose you’re building a system to pick a wall color palette for a room based on the furniture the owner already has. If you ask five interior designers you’ll get five reasonable, but different, palettes. Do you use all of these as positive training examples? How do you evaluate whether it made the right decision on a new room and set of furniture?

Choose a decision problem that is so simple that it cannot be decomposed into simpler decision problems. Let’s say you’re building an app that automatically generates a grocery list for users based on what they have in their refrigerator. Considering every possible shopping list simultaneously would be a nightmare. Instead, we can decompose this into one problem for each food item, and a larger problem that merges these decisions into a single grocery list.

The model for deciding whether to include each item on the list might be based on (a minimal sketch follows this list):

  1. Does the user currently have any of this item (or similar) in their refrigerator? Is it expired?
  2. How much does the user usually consume per day? Week? Month?
  3. Is the user’s past consumption of this item regular or sporadic?
  4. Does the user already have items that can be used in many recipes with this item?
  5. Is this item easily available to the user?
  6. Has the user indicated they are allergic / don’t like this item?
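
Here is a minimal, rule-based sketch of that per-item decision; the feature names and thresholds are hypothetical, and in practice they would be inputs to a trained classifier rather than hand-written rules:

    from dataclasses import dataclass

    @dataclass
    class ItemFeatures:
        in_fridge: bool            # already have some, and it isn't expired?
        days_of_supply: float      # how long current stock lasts at usual consumption
        eats_regularly: bool       # regular vs. sporadic consumption history
        easily_available: bool     # can the user actually buy it nearby?
        disliked_or_allergic: bool

    def should_buy(item: ItemFeatures) -> bool:
        """Decide whether a single item belongs on the grocery list."""
        if item.disliked_or_allergic or not item.easily_available:
            return False
        if item.in_fridge and item.days_of_supply > 3:
            return False           # enough stock on hand already
        return item.eats_regularly

    print(should_buy(ItemFeatures(False, 0.0, True, True, False)))  # -> True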

The model that merges these sub-solutions might be based on:

  1. How wide a range of recipes does this list, plus their at-home stock, allow? (Prioritizing recipes the user has favorited, or ones similar to those.)
  2. Does the user have enough available space to store everything on this list?
  3. Which items can be eliminated with the smallest decrease in potential recipe variety?

At the point where we’re combining the outputs of many decision models we’ve technically moved beyond a single decision problem, but I think the idea of merging solutions this way is powerful. The point is that the smaller per-ingredient decision problems are individually good starting points; a minimal sketch of the merge step follows.
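
The recipes and the greedy trimming rule below are hypothetical; the only point is that the merge step takes the per-item decisions as input and trims the list toward the criteria above:

    # Merge per-item "buy" decisions into one list that fits the user's storage,
    # greedily dropping whichever item costs the fewest makeable recipes.
    RECIPES = {                       # hypothetical recipe -> required ingredients
        "omelette": {"eggs", "cheese"},
        "pasta":    {"pasta", "tomato", "cheese"},
        "salad":    {"lettuce", "tomato"},
    }

    def recipe_count(items):
        return sum(needs <= items for needs in RECIPES.values())

    def merge(candidates, at_home, capacity):
        chosen = set(candidates)
        while len(chosen) > capacity:
            # Drop the item whose removal reduces recipe variety the least.
            drop = max(sorted(chosen),
                       key=lambda i: recipe_count(at_home | (chosen - {i})))
            chosen.remove(drop)
        return chosen

    print(merge({"eggs", "cheese", "tomato", "lettuce"}, at_home={"pasta"}, capacity=3))
    # e.g. {"cheese", "tomato", "lettuce"} – pasta and salad stay makeable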

Choose a decision problem that has a fast, easy solution. Greedily speaking, the longer it takes to reach a working prototype or implementation, the longer before you see any return on the work you put in. More importantly, the sooner you see how the model improved someone’s workflow or life, the faster you’ll be able to iterate on it and make it even better. You’ll get quick feedback on a simple solution, so you’ll be able to grab more low-hanging fruit if you want to improve the model. And if the solution is good enough for now, it frees you to go work on another problem!
