AI Design Principles: Choosing the Right Problem – Part 2

Part 2: Begin with a Decision that Many People Make Often, and Make Quickly

You’ll shoot yourself in the foot if you try to solve the sort of problem only a 10th-level wizard specializing in conjuration makes the first Tuesday of every prime-numbered year. Decisions made rarely or by few tend to be very difficult or to have little generalizable utility.

  1. What should the flux density be for each individual magnet in a particle collider to maximize experimental resolution around the 125 GeV range?
  2. Is Dragon’s Egg or Mission of Gravity a better first novel to study in a Hard Science Fiction class?

Sure, it might be fun to build an AI that could actually solve these problems, but for now it’s much more efficient to leave rare problems to humans. Remember – we spend most of our time mired in decisions everyone makes.

Begin with a decision problem where

  • it takes less than a minute to make the decision
  • lots of people make this sort of decision, and
  • people who make this decision tend to do it often.

This is really a litmus test for deciding whether a problem meets the requirements mentioned in Part 1. A problem that passes this test will satisfy many of the requirements.

Choose a decision problem where it takes less than a minute for a person to make this decision.

It should take less than a minute to make this sort of decision. At one minute per decision, an eight-hour day only gets you through roughly 480 samples, and beyond that point you lose the ability to reasonably verify the correctness of your results by hand – at least if your requirements call for over 90% accuracy. If you don’t already have a lot of labeled data, the return on time invested in labeling slow-to-label data usually isn’t worth it. I’d also ask: if it takes more than a minute to make the decision, is it really not reducible to a set of smaller decisions?

Say you’re looking to make an AI to help decide what expensive watch to buy. Things that might go through your head when making the decision might include:

  • Is it within my budget?
  • Is it the color I want?
  • Is it available in my size?

These are simple checks an AI assistant could use to automatically discard watches that aren’t worth considering, leaving you to focus on:

  • Is it comfortable?
  • Do I like the style?

Further, the easily automatable pieces of the problem are generalizable. They aren’t just applicable to watches, but to shoes, shirts, and a variety of other clothing items and accessories.
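
To make this concrete, here’s a minimal sketch of that kind of rule-based pre-filter in Python. The item fields (price, color, sizes) and the example values are made up for illustration – the point is only how small each automatable check is:

    # Minimal sketch of a rule-based pre-filter for shopping items.
    # Field names and values are hypothetical, not from any real catalog API.
    def passes_basic_filters(item, budget, wanted_color, wanted_size):
        """Return True if the item is even worth showing to the user."""
        if item["price"] > budget:            # Is it within my budget?
            return False
        if item["color"] != wanted_color:     # Is it the color I want?
            return False
        if wanted_size not in item["sizes"]:  # Is it available in my size?
            return False
        return True

    watches = [
        {"name": "A", "price": 250, "color": "silver", "sizes": [38, 42]},
        {"name": "B", "price": 900, "color": "gold", "sizes": [42]},
    ]
    shortlist = [w for w in watches
                 if passes_basic_filters(w, budget=500, wanted_color="silver", wanted_size=42)]
    # Only watch "A" survives; the user now judges comfort and style on a much shorter list.

Swap the field names and the same filter works for shoes or shirts, which is exactly the generalizability argued for above.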

Choose a decision problem where lots of people make this sort of decision.

If many people can make the decision, you can easily check your work by collaborating with them. You’ll know you’ve found one if there is an entire profession with people constantly making this decision.

The decision-makers are your source of requirements (hint: you’re making this algorithm to automate away the more tedious of the decisions they make; you’re building it for them). They’re a great source for labeled data. If labeled data isn’t easy to come by, you can usually

  • outsource data labeling to them,
  • consult them on tricky cases, and
  • literally use their daily work as training data.

Best of all, as the designer of the algorithm, YOU should know how to make this sort of decision. Learning how to make the decision yourself – or at least building some intuition for it – saves you a lot of time when verifying system performance.

Choose a decision problem where people who make this decision tend to do it often.

This gets at how much data already exists or can easily be generated. If it’s a once-per-year decision, you aren’t likely to have much historical data to base your model on.

If your system is part of users’ everyday lives, they will see its improvements constantly. Highlighting common errors in documents as they’re typed is a prime example: people need help making sure they typed a word correctly or haven’t made some easily-catchable mistake. With that handled for them, they’re free to focus on what they actually want to say rather than on whether it’s really spelled “concured” or “concurred”.

Given a free choice between automating away an almost-mindless decision I make nearly every day and one I’d spend hours contemplating but might only make once in my life, I’d pick the former. What’s great is that the frequent decision usually turns out to be simpler to automate as well, so not only is the cost lower, but the reward is greater.

AI Design Principles: Choosing the Right Problem – Part 1

Part 1: Begin with a Simple, Easy Decision Problem


If there’s a big mistake people make when designing machine learning systems, it is deciding to tackle the wrong problem. Pick the wrong problem, and you can easily spend months bashing your head against one that would require a research team years to solve properly. Most companies don’t have that magnitude of resources to devote to solving an individual problem, and it isn’t cost effective anyway.

How do you decide what problem to solve? Begin with a simple, easy decision problem.

What is a “simple, easy decision problem”?

A simple, easy decision problem is one that

  • involves automating making a decision
  • has a well-defined set of mutually-exclusive possible decisions
  • has explainable decisions that reasonable people would agree on
  • is so simple that it cannot be decomposed into smaller problems worth solving
  • has a fast, easy solution

Why?

Choose a decision problem to automate. Decision problems automate thought processes humans already carry out. If you really just want to understand your data, then at this stage what you want is data exploration and analysis (possibly using machine learning). In general you wouldn’t automate a process like k-means clustering for its own sake – you’d run it with a specific purpose in mind. On the other hand, you would automate a decision like “Should I switch the light in the north-south direction at this intersection to red?”

Choose a decision problem with a well-defined set of possible decisions. If the set of possible decisions the machine might make is unbounded, or it isn’t clear what a decision means, you’ll have problems. If there are infinitely many possible decisions – or simply so many that a human couldn’t reasonably consider them all – then even the simplest algorithm capable of producing every possible answer is very complex. You also lose the ability to train on each possible response, since gathering data for every one may be impractical.

On the other hand, deciding “Which one of these ten genres does this book fall in based on its title and text?” is well-defined. As a person making the decision, you list out the possible genres and pick the one the book fits best. The process for the machine is analogous – it might calculate a score for each genre and pick the top one.
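
As a rough illustration – a toy keyword scorer standing in for whatever model you’d actually train – the “score each genre and take the top one” step might look like this:

    from collections import Counter

    # Toy genre scorer: keyword counts stand in for a real trained model.
    GENRE_KEYWORDS = {
        "science fiction": ["starship", "alien", "galaxy"],
        "mystery": ["detective", "clue", "murder"],
        "romance": ["love", "heart", "wedding"],
        # ... the remaining genres would get their own keyword lists
    }

    def classify(title, text):
        words = Counter((title + " " + text).lower().split())
        scores = {genre: sum(words[k] for k in keywords)
                  for genre, keywords in GENRE_KEYWORDS.items()}
        return max(scores, key=scores.get)  # mutually exclusive: exactly one genre comes out

    print(classify("The Last Starship", "An alien fleet crosses the galaxy"))
    # -> science fiction

Because the output is a single label from a fixed list, you can check it sample-by-sample and train on straightforward (book, genre) pairs.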

Choose a decision problem with a set of mutually-exclusive decisions. If there are multiple correct decisions for the same set of inputs, you lose the ability to train the model toward a specific target. If any combination of answers is permissible and you have more than, say, 20 possible answers, you really have an answer space with over a million possible combinations (2^20 is just over a million). It also complicates training the model. Say you’re training a chat bot to answer natural language questions users pose. If the chat bot gives three of five essential pieces of information when responding to a particular question, “how correct” was it, and how should the interaction be counted in training or evaluation?

Choose a decision problem where reasonable people would make the same decisions. Suppose you’re building a system to pick a wall color palette for a room based on the furniture the owner already has. If you ask five interior designers you’ll get five reasonable, but different, palettes. Do you use all of these as positive training examples? How do you evaluate whether it made the right decision on a new room and set of furniture?

Choose a decision problem that is so simple that it cannot be decomposed into simpler decision problems. Let’s say you’re building an app that automatically generates a grocery list for users based on what they have in their refrigerator. Considering every possible shopping list simultaneously would be a nightmare. Instead, we can decompose this into one problem for each food item, and a larger problem that merges these decisions into a single grocery list.

The model for deciding whether to include each item on the list might be based on:

  1. Does the user currently have any of this item (or similar) in their refrigerator? Is it expired?
  2. How much does the user usually consume per day? Week? Month?
  3. Is the user’s past consumption of this item regular or sporadic?
  4. Does the user already have items that can be used in many recipes with this item?
  5. Is this item easily available to the user?
  6. Has the user indicated they are allergic / don’t like this item?

The model that merges these sub-solutions might be based on:

  1. How wide a range of recipes does this list, plus their at-home stock, allow? (Also, prioritize recipes the user has favorited, or ones similar to them.)
  2. Does the user have enough available space to store everything on this list?
  3. Which items can be eliminated with the smallest decrease in potential recipe variety?

At the point where we’re combining the outputs of many decision models we’ve technically diverged from pure decision problems, but I think the idea of merging solutions this way is powerful. The point is that the smaller per-ingredient decision problems are individually good starting points.
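
Here’s a sketch of what one of those per-item decisions might look like as code. Every field name and threshold is invented purely to show how small each sub-problem becomes once it’s decomposed:

    # Toy per-item decision: "should this item go on the grocery list?"
    # All fields and thresholds here are hypothetical.
    def should_buy(item):
        if item["allergic_or_disliked"]:   # factor 6: never suggest it
            return False
        if not item["easily_available"]:   # factor 5: can't buy it anyway
            return False
        # factors 1-2: do they have usable stock, and how long will it last?
        out_of_stock = item["quantity_on_hand"] <= 0 or item["expired"]
        days_of_supply = (item["quantity_on_hand"] / item["daily_consumption"]
                          if item["daily_consumption"] else float("inf"))
        return out_of_stock or days_of_supply < 7

    eggs = {"quantity_on_hand": 2, "expired": False, "daily_consumption": 1.5,
            "easily_available": True, "allergic_or_disliked": False}
    print(should_buy(eggs))  # True: roughly 1.3 days of supply left

A merging step would then take the per-item yes/no (or scored) outputs and trim the combined list against storage space and recipe coverage.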

Choose a decision problem that has a fast, easy solution. Greedily, the longer it takes to get a working prototype or implementation, the longer before you see returns on the effort you’ve put in. More importantly, the sooner you see how the model improved someone’s workflow or life, the faster you’ll be able to iterate on it and make it even better. You’ll get quick feedback on a simple solution, so you’ll be able to grab more low-hanging fruit if you want to improve the model. And if the solution is good enough for now, it frees you to go work on another problem!

The Revelation Principle and Salary Negotiations

Salary negotiations are hard. It’s an information asymmetric game where the job candidate is trying to maximize their salary, and the recruiter is trying to hire them for the lowest price possible.

Both have considerable incentive to misrepresent themselves. The recruiter’s strategy is to lowball the candidate – trying to get them to revise their self-worth downward. The candidate’s strategy is to highball – anchoring the employer’s perception of their value to a higher number.

This is poisonous for a variety of reasons. The candidate may take a salary that’s less than they’re happy with, which is one of the most common reasons for attrition. It puts employers in the position of taking advantage of someone even before they’re hired. Most importantly, the game pits the prospective new hire against the very company that wants their loyalty!

We can use the revelation principle to turn this into a collaborative game where the optimal strategy for both the prospective candidate and the employer is to honestly say what they think. In fact, the 2007 Nobel Prize in Economics went to Hurwicz, Maskin, and Myerson for mechanism design theory, which shows (via the revelation principle) that any outcome achievable in a game where parties hold private information can also be achieved by a mechanism in which honestly revealing that information is the best strategy.

But, what do we need to do to get this?

Let’s start with the Vickrey auction as inspiration. Standard auctions generally operate in one of three ways: (1) raise bids until only one person is willing to pay the price, (2) lower the price from a starting amount until someone is willing to pay it, or (3) bidders secretly submit bids, and the highest bidder wins and pays their bid. All three encourage elaborate strategies that lead to suboptimal bidding. The Vickrey auction is just a slight variation on (3), where the highest bidder pays the second-highest bid instead of their own. See the Wikipedia page on Vickrey auctions for a sketch of the proof showing why it rewards honesty.
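
For concreteness, the second-price rule fits in a few lines – the highest bidder wins but pays the runner-up’s bid, which is what removes the incentive to shade your bid below your true value:

    # Minimal Vickrey (sealed-bid, second-price) auction.
    def vickrey(bids):
        """bids: dict of bidder -> bid. Returns (winner, price paid)."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner = ranked[0][0]
        price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
        return winner, price

    print(vickrey({"alice": 120, "bob": 100, "carol": 90}))
    # -> ('alice', 100): alice wins but pays bob's bid,
    #    so bidding her true value costs her nothing extra.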

But that’s bidding. What about salary negotiations? All you need is two pens and two pieces of paper.

Write down how much you think you’re worth on a piece of paper. Have your prospective employer write down how much you’re worth to them. If they write down that you’re worth as much as or more than you think you are, go with your number. Otherwise, thank them for their time and go elsewhere.
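
That rule is simple enough to write down directly. Here is the whole mechanism from the previous paragraph as a tiny function – note that the salary paid never depends on the employer’s number, only on whether it clears yours, which mirrors the Vickrey payment rule:

    # The two-pieces-of-paper salary mechanism described above.
    def negotiate(candidate_ask, employer_value):
        """Hire at the candidate's number iff the employer values them at least that much."""
        if employer_value >= candidate_ask:
            return ("hire", candidate_ask)  # salary is the candidate's own number
        return ("no hire", None)

    print(negotiate(90_000, 110_000))  # ('hire', 90000)
    print(negotiate(90_000, 80_000))   # ('no hire', None)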

Strategies

Terms

Truthful – making an offer you think reflects your actual value
Untruthful – making an offer you think does not reflect your actual value
Underbidding – offering less than the candidate’s amount
Bidding – offering exactly the candidate’s amount
Overbidding – offering more than the candidate’s amount

Employer Strategies

The dominant strategy is for both you and the prospective employer to be honest. Sure, you may miss out on money but you’re already at an amount you’ve agreed you are happy with.

  • The employer is equally incentivized for truthful bidding and truthful overbidding. Their bid doesn’t change how much they pay you, so the strategies have equal value.
  • The employer is incentivized to truthfully underbid. If to them you’re not worth as much as you think you are, they should not hire you.
  • The employer is discouraged from untruthful overbidding – they end up paying you more than they think you’re worth.
  • The employer is discouraged from untruthful underbidding – they miss out on getting you at all.

Candidate Strategies

Underrequesting – untruthfully requesting less than you think you’re worth
Overrequesting – untruthfully requesting more than you think you’re worth

For candidates, as with employers, the dominant strategy is to provide a value that is an honest assessment of your worth.

  • You are discouraged from underrequesting. Of course you want to be paid what you’re worth! Being paid less than you think you’re worth greatly increases the chance you’ll soon be looking for another job – at a higher amount.
  • You are encouraged to overrequest only if you think the employer values you far above your own estimate. If you think you’re worth $80,000 and ask for $100,000, you must believe that, given the employer values you at at least $80,000, they are more than 80% likely to value you at $100,000 or more (see the worked numbers after this list). In most cases this is unlikely.
  • You are discouraged from overrequesting otherwise, as it unnecessarily increases your risk of not being hired by an employer who values you at least as much as you think you’re worth.
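
To see where that 80% figure comes from, compare expected salaries under the two asks, conditioning on the case where the employer already clears your honest $80,000 estimate (the numbers are just the illustrative ones from the bullet above):

    # Expected-salary comparison for the $80k-vs-$100k example above.
    honest_ask, inflated_ask = 80_000, 100_000

    # p = chance the employer values you at >= $100k, GIVEN they value you at >= $80k.
    def expected_pay(p):
        honest = honest_ask          # hired for sure at your honest number
        inflated = p * inflated_ask  # hired only if they also clear the inflated number
        return honest, inflated

    for p in (0.5, 0.8, 0.9):
        print(p, expected_pay(p))
    # Inflating only wins when p * 100_000 > 80_000, i.e. when p > 0.8.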

To counter candidate overrequesting, of course, the employer must get an accurate read on your skills. Or, at least, convince you that they have.

Final Thoughts

The outcomes and strategies of this game probably change once you consider that (1) you probably interview for multiple positions and (2) companies usually interview multiple candidates for a single position. I’ll have to think through the implications, since they could change the optimal strategies.

Tribe of Mentors and Cybernetics


Tim Ferriss sent hundreds of successful people the same list of 11 interview questions and collated the 140 responses into a book, Tribe of Mentors. This is exactly the type of data a cybernetic approach is for. The interesting things here are less what any individual interviewee said than the patterns in what they said collectively. What were their roadblocks? What was important to them? How did they disagree with each other?

One of my favorite quotes directly attacks the premise of the book: “[Advice] is almost always driven by anecdotal experience, and thus has limited value and relevance…. Ignore advice, especially early in one’s career. There is no universal path to success.” (John Arnold, page 374) It should say something that, while going back through the book to find this quote, I stumbled on five similar ones. I agree with the sentiment, but John Arnold misses a broader point, said succinctly by Matsuo Basho: “Do not seek to follow in the footsteps of the wise; seek what they sought.” It is not that there is nothing to be learned by listening to advice; it is that advice is not transferable without understanding its context or principles. The reasoning behind a conclusion is more useful than the conclusion, and patterns of reasoning across many mentors are more useful still. From a single person’s reasoning you can follow their logic and decide for yourself how believable their conclusion is. From the reasoning of many people you have the opportunity to develop principles you can apply to other contexts. So sure, there is no universal path, or even a universal map. But in seeing how many others read the maps of their lives, maybe you can learn to read your own.

Conflicting advice is the best source of this sort of direction. Fortunately, Tribe of Mentors is full of conflicting advice. Tim did an excellent job positioning similar people with strong disagreements – at times I had the thought “didn’t they just say the opposite thing?”, but when I turned back a few pages I saw it was from a different interviewee. The myriad dissonant voices blur together into beautiful higher-order concepts.

For example, here’s a smattering of work life advice from the book:

  • “You should set up your life so that it is as comfortable and happy as possible.” (Susan Cain, 13)
  • “Ignore anyone who tells you to go for security over experience.” (Patton Oswalt, 106)
  • “Advice they should ignore: … Avoid risk. Play it safe.” (Josh Waitzkin, 197)
  • “I do not believe in work-life balance.” (Debbie Millman, 29)
  • “Burnout is not the price you pay for success.” (Arianna Huffington, 214)
  • “Growth and gains come from periods of rest.” (Amelia Boone, 130)

For this I’ll take the position that “The fact is that when two extreme opinions meet, the truth lies generally somewhere in the middle.” (Annie Duke, 172) I’d go a step further and claim that not only does the truth lie somewhere in the middle, but that everyone has to figure out where they fall on the spectrum for themselves. Sure, I could be deluding myself that a sense of security is required for my creative work, just as someone else might incorrectly think they perform best in adversity. But who are we to question someone else’s experience of themselves? The best we can do is show them that there are other paths and give them the tools to assess the one they’re on.

Tribe of Mentors and Meditation

Daily meditation was a common theme in Tribe of Mentors. Is it a fad, or is there something to it?

Meditation may be overhyped, but there’s probably something to it. If meditation makes you feel better, do it. Meditation seems to make me feel better, but my anecdote should not be a compelling reason for you (nor should the similar anecdotes of 80% of Tim’s interviewees). Science-Based Medicine has an excellent article on the current scientific understanding of mindfulness meditation and what would need to be done to be certain of its impact. Here’s a meta-analysis of 47 studies suggesting a small causal relationship between meditation and reduced anxiety, depression, and chronic pain. This doesn’t mean that meditating will make you as successful as the interviewees in Tribe of Mentors. At best, the data from Tribe of Mentors suggest a relationship between the likelihood of being in Tim’s book and the likelihood of meditating regularly. It could easily be that successful people who meditate are statistically more susceptible to Tim’s charm.

Building Cities

I need to write about something that matters to me.

I like building. That feeling of planting a small seed and having an impact on it through every stage in its development. Coming to know every vein and leaf: its history, its purpose, and its future. The small sadness of pruning a tiny masterpiece to make room for a larger, better picture. The grand vision of “something wonderful” that slowly takes shape.


I’ve always been fascinated by simulations – the idea that solving problems in artificial environments, with rules far simpler than reality, can not only challenge us but give us insight into the real world. It’s what keeps bringing me back to Sim City 4, a game I’ve been playing continually for fifteen years.

When I began playing, I avoided complex, mountainous terrain because it meant I couldn’t pack my city with as many people as possible. There was some “maximum score” of population per square kilometer, and I had to reach it. In returning to the game as an adult, I’ve come to value both the challenge of designing a city around hills and rivers, and the beauty of a city that adapts to these features. Designing for a perfectly flat plain turns into an abstract math problem; it no longer holds my interest, as it feels like solving something one too many steps divorced from life.

Working in a simulation is a wholly different mindset than programming, but it has given me so much intuition for solving problems with logic automation. I can’t tell the game “automatically build a road that follows the contour of the mountain.” I can’t say “replace old power plants when they stop functioning properly.” I must do it all myself. Not that it wouldn’t be rewarding to do so – when programming city simulators for myself I often automate away such things – but the experience is different. The act of deciding where to place a road, of having to regularly check on how well a wind turbine is doing, makes me feel like the city is something I’m creating. Automating that feels like teaching a mind that then gets to have all the fun.


This is very much a game where “the fun is in the journey, not the destination”. I’ve had people over the years ask me what the point or goal of the game is since the game doesn’t directly give you one. The goal is not to have built cities, but building cities.

 

Post Script: Mods

I can’t play Sim City 4 without plugins. The SC4 modding community has remained strong all these years – they’ve made many quality-of-life improvements as well as changes that make the game fulfill the creators’ intents better than the original. These are the ones I currently play with:

CrimeDoesntPay – makes crime in big cities reasonably manageable

IH_census – makes high tech industry offer high paying jobs

Network Addon Mod – makes public transit and traffic flow better, as well as fixes many traffic bugs

OperaHouse – the original behavior was bugged

radius_doubler – makes many services have more reasonable service radii, especially in low density cities

SPAM – fully builds out the farming system which wasn’t finished for the final game

Planning and Entropic Forces

The Foundation series is a classic science fiction series by Isaac Asimov. In it, Hari Seldon, a scientist in the fictional field of psychohistory, predicts the collapse of the galactic empire by modeling the future of humanity with entropic forces. Seldon devises a plan to ensure the best future for humanity after the collapse: he forecasts a thousand years of the future, then sets a series of actions in motion to steer humanity down a highly unlikely, very specific path toward the desired outcome.

I think he got it wrong.

By focusing on one path, Seldon limits the potential good futures to a single, very unlikely path. Spoiler: guess how that worked out. Despite understanding how entropic forces made the end of the empire all but inevitable, he never turns this idea to humanity’s advantage. He should have taken actions that maximized the number of potential future paths leading to desirable states.

What does this mean?

The more ways a plan has of succeeding, the more robust it is. Robust plans don’t fail at the first unforeseen challenge. Consider an example from technology – database redundancy. If everything is stored in one database and that database goes down, the entire system stops working. However, if there are three copies of the database (kept far enough apart that failures are independent), then the likelihood of the entire system going down is minuscule. As we increase system redundancy, the entropy of the state where all systems are failing approaches zero, and the entropy of the states we want grows without bound.

(In this situation there are practical redundancy limits, and the best practice is usually to go for at least two more than the number of databases for the system to run under the heaviest expected load. It may be interesting to explore the math later …)
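
As a back-of-the-envelope check on the redundancy claim – assuming, generously, that each copy fails independently with the same probability p – the chance of a total outage shrinks geometrically with each replica:

    # Probability that ALL replicas are down at once, assuming independent failures.
    def total_outage_probability(p_single_failure, replicas):
        return p_single_failure ** replicas

    for n in (1, 2, 3):
        print(n, total_outage_probability(0.01, n))
    # -> 0.01, 0.0001, 1e-06: each extra independent copy
    #    multiplies the chance of a total outage by p.

In practice failures are often correlated (same data center, same bad deploy), which is why the “far enough away that failures are independent” caveat above does the heavy lifting.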

How do I use this in planning?

Have more than one path towards what you consider “success”. Suppose your goal is to work in position X at Company Y. Naively, your path forward is to apply for position X. You have some probability of getting the position, and you either get it or you don’t.

There are more paths than these.

What are other paths that accomplish your needs? Do you need the position right now or can it wait a year? Does it need to be that position at that company, or are there similar opportunities at the same (or at similar) companies? Is it acceptable to apply, fail and get feedback, then try again? Is there, perhaps, an intermediary stepping-stone job? A degree that might increase your chances? A meetup group where you might run into employees who could refer you?

The answers to all of these depend on your constraints – your requirements. If you want one specific future, then you really only have one option, and entropic forces will (statistically) work against you. If more than one future – or more than one path – is acceptable, then you can use them in your favor. In general, the more paths you have and the more likely you make each of them to succeed, the higher the entropy of desired futures.

But what does that look like?

I recently switched jobs. I decided I wasn’t happy on my current team and I needed a change. There wasn’t anything urgently wrong, so I was fine with waiting up to a year. After thinking about the sorts of changes I wanted, I realized there were three main categories of futures I would consider a success:

  1. finding a team I would be more happy with at my current company,
  2. finding a new company I would be more happy at, or
  3. going back to school for a graduate degree.

I added constraints to each future. For example, there are companies whose cultures I would never want to work in. I wanted to stay in the Bay Area, which limited both the schools and the companies I could look into.

Then I broke each general future into a rough sequence of tasks: ones required to make that future happen, and ones that would make it more likely. Required tasks generally fell into two groups: ones that cut off my ability to pursue other paths (e.g. accepting a job offer) and ones that moved a path along while changing (usually reducing) the likelihood of the others (e.g. doing an onsite interview). While the required tasks varied greatly, the optional tasks overlapped heavily across futures: networking, studying, and understanding the qualities I wanted in a team were common to all three.

By doing the tasks that made all of the successful futures more likely, I increased the entropy of futures where I succeeded in my goal. My plans were robust in that there was no single point of failure that could topple them – there were more companies to apply to, many possible teams, and if those didn’t work out in a year then I also had applications for school in the works. Throughout it all, I would be gradually improving the likelihood of each subsequent attempt.

Briefly, these are the steps this approach suggests:

  1. What futures do you consider successful?
  2. What are your constraints on these futures?
  3. What increases the likelihood of each future?
  4. What are common actions that increase the likelihood of multiple futures?
  5. In general do as much towards the common actions as your constraints allow, and do specific actions as necessary.

Status Update 2017-10-08

In the past year (for fun) I’ve written nearly a hundred thousand words for myself, and another ten thousand in personal correspondence. I think it’s time I started writing publicly again.

Healing Through Cooking 2

It’s been about a month and things are … better. Brand new place, spent a few weeks out of town, changing my wardrobe, typical post-breakup stuff. Not having my own place – or cooking equipment to call my own – really put a damper on my ability to cook the past month. One new lease and Amazon shopping list later, that’s now taken care of. I found that my favorite food blog, Serious Eats, has an article just for my situation.

It feels nice to cook everything for myself for the first day in a long while.

Breakfast

It became a ritual for me in college: fry bacon in a skillet, then fry a couple of eggs in the bacon fat once the bacon is done, flip the eggs, and serve immediately. About five minutes before that’s finished, butter some bread and toast it in the oven at 400F for five minutes. Unlike in college, I’ve now got a cast iron skillet to do this in. The main difference is that the eggs cook very quickly, since cast iron holds heat so well. I’ll have to get used to the new oven and adjust to it as well. It’s not great, but it’ll do what I need once I figure out the timing.

Lunch

French onion soup. I know what you’re thinking: two hours?! Nope – that’s what’s great about having a pressure cooker; it cuts the time to 40 minutes. It also helps that I made it a couple of days ago to share with a friend. Before then I’d only ever eaten it at La Madeleine. Cooking it – and reheating it today – filled my entire apartment with the smell of caramelized onions. I can’t complain. It is incredibly filling, and there’s no way I’ll finish the leftovers before they go bad a day or so from now. When I make it again I’ll make sure to have enough friends nearby to help eat it.

Snack

One from Alton Brown’s Everyday Cook – specifically, the savory Greek yogurt dip. I’ve been munching on it throughout the afternoon with some carrots and celery. I’ll have to try it with potato chips next time; I really want to see how it goes with something starchy and salty.

Dinner

This was the first meal I cooked once I had the barest essentials in my apartment. Surprisingly I hadn’t ever made it for myself, but I’m going to make it regularly from now on. Steak, mashed potatoes, and sauteed mushrooms. It’s about as American as a meal can get.

Healing Through Cooking

Life sucks. I’m going to use cooking to make it better.

Long story short, I’m staying at a coworker’s house for the week. I’ve just experienced the worst two weeks of my life. They’re finally over, and I need to heal. Cooking is going to be part of that.

For my birthday, I bought myself a copy of Alton Brown’s new cookbook, Everyday Cook. Over the weekend I sat down in coffee shops and planned stuff to make this week. Most importantly: Bad Day Bitter Martini.

The hardest part of the Bad Day Bitter Martini was crushing the ice. After folding the ice in a clean towel as directed, I smashed the ice between two cutting boards.

Obviously, I made the strong version, so I didn’t need no fancy Boston-style cocktail shaker. I’m not really a fan of bitter alcoholic drinks. I’m a sweet and fruity kind of guy – if it’s bright pink and practically candy, I’m in. So I was pleasantly surprised that this simple drink was quite delicious. Normally bitter drinks make the back of my throat involuntarily contract, as if it’s afraid to let anything else down. This … didn’t. I can tell the aroma of the grapefruit peel played a big role in that, but what’s doubly odd is that I really don’t like grapefruit. Somehow the bitter-sweet of the Amaro and the bitter-sour of the grapefruit balanced into something palatable to me.

Unfortunately, my host’s wife has gone to bed, so it’s too late for me to make another (crushing ice is loud).