For the past ten years or so, I’ve been mostly a full-time advisor, pundit, and columnist. Before that, I was a manager: a product marketing executive earlier in my career and a publisher later on. I had people directly reporting to me for years.
I managed editors, salespeople, programmers, manufacturing teams, and other executives.
You want to know one of the best things about my encore career? No direct reports. I don’t have to manage anyone.
People who haven’t been managers think bosses get to spend their time dumping work on underlings and just bossing people around. Managers know the reality: they spend oh so much time simply trying to get the people who work for them to do their jobs as instructed.
Some of that falls on the manager, who may or may not give clear instructions. But an equal amount of that challenge falls on the direct reports who misinterpret instructions, passive-aggressively follow directions to the letter (this was my karma payback, because I did this to my bosses), or simply need to be negotiated with to do what needs doing.
It’s part of why I like programming so much. With programming, the computer will also do exactly what you tell it to do. Exactly. Of course, the precision with which a program follows instructions often leads to bugs, especially on the first try. But that’s okay, because whatever it does wrong is right there, in the code.
It may be a challenge to come up with the right algorithm or to translate the algorithm and data structures in your head into working code, but code is code. It’s consistent and reasonably predictable.
Then there’s AI. Giving instructions to an AI like ChatGPT is much more like managing a programmer than it is like programming. Everything is subject to interpretation and negotiation. Yes, you can get results, and sometimes you can get results you couldn’t have gotten without a lot of coding, but there’s still some degree of haggling, negotiating, reframing requests, and try after try to get it right.
You can give an AI the same prompt twice and get two different results. Unless your code has some sort of randomization function or a serious bug, you can run it twice and it will return the exact same results.
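Here's a quick way to see that difference in code. This is a minimal sketch, assuming the official openai Python client (v1) with an API key in your environment; the model name and the prompt are just hypothetical stand-ins, not anything I actually ran.

```python
# A minimal sketch of the contrast, assuming the official `openai` v1 Python
# client and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

def shout(text: str) -> str:
    # Plain code: the same input produces the same output, every time.
    return text.upper()

assert shout("devolve") == shout("devolve")  # always identical

client = OpenAI()

def ask(prompt: str) -> str:
    # An LLM samples its reply, so repeated calls can come back different.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # hypothetical model choice, for illustration
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Hypothetical prompt; any open-ended request behaves the same way.
question = "Suggest one word that means roughly the same as 'devolve'."
print(ask(question))
print(ask(question))  # often a different answer than the first call
```

Even if you pin the temperature to zero, providers generally don't promise bit-identical completions from one call to the next, so the haggling never entirely goes away.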
Will AI take programming jobs?
I’ve been giving this question a lot of thought, especially in light of some prompt writing I did this weekend while working on an article about advanced prompt writing. In that article, I tried to get ChatGPT to solve a very simple problem, and it wound up taking me hours and more than 20 prompt attempts to get it to work reliably. The prompt was:
Word similar to devolve that begins