
> And the Bezos quote gave the definition of course correcting:

> 1. quickly recognizing bad decisions

> 2. quickly correcting bad decisions

I wonder if there's a difference between (slow recognition + fast correction) and (fast recognition + slow correction). Are recognition and correction equally valuable, or is one more important than the other?

Does the nature of the goal affect the importance of course correction? For example, I imagine a more specific, harder-to-attain goal would require more course correction than a less specific, easier-to-attain one. E.g. "Bob wants 10,000 followers on Twitter by the end of the month" would probably require more course corrections than "Bob wants between 10 and 10,000 followers in the next 6 months".

In the latter case, would it be more important to move faster, with somewhat poor (but still some) course correction?


> there's a difference between (slow recognition + fast correction) and (fast recognition + slow correction)

In my mind, I see it as a linear value stream ( https://en.wikipedia.org/wiki/Value_stream ), where recognition is upstream of correction, so you literally cannot have correction without recognition.

When you phrase the question as "is fast recognition better, or fast correction?", then from the value-stream perspective it sounds (to me) like optimizing local throughput.

I care about global throughput more than local throughput. See the Theory of Constraints for why global throughput matters more: https://fortelabs.com/blog/theory-of-constraints-102-local-optima/ So my answer is whichever gets me a higher global throughput (end-to-end, from making the mistake to actually correcting it).
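To make that concrete, here's a toy sketch (my own, with invented rates, not from the linked article): a two-stage value stream where the slower stage caps global throughput, so speeding up the non-constraint stage changes nothing end-to-end.

```python
# Toy value stream: mistakes flow through recognition, then correction.
# All rates are made up for illustration.
def global_throughput(recognition_rate, correction_rate):
    """In a serial pipeline, end-to-end throughput is capped by
    the slowest stage (the constraint)."""
    return min(recognition_rate, correction_rate)

baseline = global_throughput(4.0, 1.0)  # here correction is the constraint
print(baseline)                         # -> 1.0 mistake corrected per week
print(global_throughput(8.0, 1.0))      # doubling recognition: still 1.0
print(global_throughput(4.0, 2.0))      # doubling correction: 2.0
```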

> more specific, harder-to-attain goal

Actually, speaking of goals, in my mind I now have a simple if-statement (sketched in code after this list):

1. If it's a simple, totally doable goal (e.g. one I have attained a few times before), then goal-setting, moving fast, etc. are all good.

2. If it's a complex thing like growing an audience, I am less sure about the effectiveness of goal-setting and moving fast.
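Here's that if-statement as a minimal sketch; the function name and branch labels are my own:

```python
def pick_approach(goal_is_simple_and_done_before: bool) -> str:
    # Branch 1: a goal I've attained a few times before ->
    # goal-setting and moving fast both work well.
    if goal_is_simple_and_done_before:
        return "set the goal, move fast, course-correct often"
    # Branch 2: a complex goal (e.g. growing an audience) ->
    # less sure that goal-setting and speed help at all.
    return "be wary of goal-setting; see 'Why Greatness Cannot Be Planned'"

print(pick_approach(False))
```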

I am influenced by the arguments against goal-setting set out in "Why Greatness Cannot Be Planned". They call it the objective paradox.

It's like a Chinese finger trap ( https://www.youtube.com/watch?v=PSnw-PHLUxY ): the harder you try (or the more you optimize for the KPI), the further you get from your (complex) goal.

They also have a great term for complex goals where the better your KPI looks, the worse off you are: deceptive objectives.
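To make "deceptive objective" concrete, a toy sketch (mine, not from the book): a 1-D landscape where greedily climbing the KPI strands you on a local peak, while a bare-bones novelty-style search that ignores the KPI and just visits new states reaches the true optimum.

```python
from collections import Counter

# Toy deceptive landscape on positions 0..20 (all numbers invented):
# the KPI rises toward a local peak at x=5, but the true optimum is
# at x=20, reachable only through a valley where the KPI gets worse.
def kpi(x):
    return 10 - abs(x - 5) if x <= 10 else x

def clamp(x):
    return min(max(x, 0), 20)

def greedy(x=0, steps=50):
    """Always move to whichever neighbor scores best on the KPI."""
    for _ in range(steps):
        x = clamp(max((x - 1, x, x + 1), key=lambda n: kpi(clamp(n))))
    return x

def novelty(x=0, steps=100):
    """Ignore the KPI; step toward the least-visited neighbor
    (a bare-bones stand-in for novelty search)."""
    visits, best = Counter([x]), x
    for _ in range(steps):
        x = min((clamp(x - 1), clamp(x + 1)), key=lambda n: visits[n])
        visits[x] += 1
        best = max(best, x, key=kpi)
    return best

print(greedy())   # 5  -> stuck on the deceptive local peak
print(novelty())  # 20 -> finds the optimum without chasing the KPI
```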

Not to plug my own piece on purpose, but I'm halfway through writing my commentary on the Why Greatness book. I have put out part 1 at https://www.entrepreneurial.engineer/p/op-1


> When you phrase the question as "is fast recognition better, or fast correction?", then from the value-stream perspective it sounds (to me) like optimizing local throughput.

> I care about global throughput more than local throughput.

Right, it's like what they say in Operations Research: "local optimality is not global optimality." I get that. But to achieve a globally optimal "throughput," you have to tune "local" variables that may be weighted differently, and I guess I saw recognition and correction as variables you need to tune for global optimality. So, in that sense, you would have to care how those factors weigh against each other to achieve a globally optimal solution.
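As a toy example of those weights (all numbers invented): if a unit of improvement effort buys different amounts of end-to-end time at each stage, with diminishing returns, the globally optimal allocation falls out of comparing marginal gains:

```python
def allocate(budget, gains):
    """Greedily spend each unit of effort where the marginal hours
    saved are largest, assuming diminishing returns (the gain halves
    each time a stage receives another unit)."""
    gains = dict(gains)                    # don't mutate the caller's dict
    spend = {stage: 0 for stage in gains}
    for _ in range(budget):
        stage = max(gains, key=gains.get)  # biggest marginal gain wins
        spend[stage] += 1
        gains[stage] /= 2                  # assumed diminishing returns
    return spend

# Made-up numbers: hours of end-to-end time saved per unit of effort.
print(allocate(5, {"recognition": 2.0, "correction": 0.5}))
# -> {'recognition': 4, 'correction': 1}
```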

> I am influenced by the arguments against goal-setting set out in "Why Greatness Cannot Be Planned". They call it the objective paradox.

100%! This sort of reminds me of the Viktor Frankl quote:

"Don’t aim at success. The more you aim at it and make it a target, the more you are going to miss it. For success, like happiness, cannot be pursued; it must ensue, and it only does so as the unintended side effect of one’s personal dedication to a cause greater than oneself or as the by-product of one’s surrender to a person other than oneself."

Scott Adams has also espoused using systems over goals, so I can totally understand the arguments against goal-setting.


> you would have to care how those factors

Strong agree. Yes, I have to.

I am less confident talking about it in an abstract, one-size-fits-all way. It's possible it can be generalized (e.g. I'm leaning towards faster recognition), but I'm not 100% confident.

As a counter-example, I can imagine specific, concrete situations where I'd have to improve the end-to-end course-correcting iteration speed by working on the correction side.

That's because there's no room left for improvement on the recognition side, but there is still room on the correction side.

> Viktor Frankl, Scott Adams

I just came across another Substack article that drew the same parallel, at this specific section: https://engineeringideas.substack.com/i/44987032/parallels-with-frankl-and-watts

The full article drew a lot of parallels with other authors.

Scott Adams popularized the whole "systems >> goals" idea so well that it's now mainstream, to his credit.

I prefer Stanley et al.'s treatment because it's more precise.

Differences between the whole "systems >> goals" idea and the "Why Greatness" book:

1. Why Greatness doesn't disagree with goals per se. It says goal-setting doesn't work so well for complex domains, ambitious achievements, and "deceptive objectives".

2. S >> G uses the term "goals". "Goals" is a friendlier term than "objectives", though "objectives" is the more technically accurate term. Usually the friendlier term wins mainstream mindshare.

3. Why Greatness sets out more details, such as: instead of optimizing for the KPI, optimize for interestingness (and it gives the evidence for this). Also: focus on meeting constraints so you survive long enough for serendipity to find you. Both are missing from the "S >> G" thesis, and both are way more actionable.
