Will AI take programming jobs or turn programmers into AI managers?


For the last ten years or so, I've been mostly a full-time advisor, pundit, and columnist. Before that, I was a manager. I was a product marketing executive earlier in my career and a publisher later on, with people reporting directly to me for years.

I managed editors, salespeople, programmers, manufacturing teams, and other executives.

You want to know one of the best things about my encore career? No direct reports. I don’t have to manage anyone.

Also: 7 advanced ChatGPT prompt-writing tips you need to know

People who haven’t been managers think bosses get to spend their time dumping work on underlings and just bossing people around. Managers know that the reality is they spend oh so much time simply trying to get the people who work for them to execute their job duties as instructed.

Some of that falls on the manager, who may or may not give clear instructions. But an equal amount of that challenge falls on the direct reports who misinterpret instructions, passive-aggressively follow directions to the letter (this was my karma payback, because I did this to my bosses), or simply need to be negotiated with to do what needs doing.

Also: How to use ChatGPT to write code

It's part of why I like programming so much. With programming, the computer will do exactly what you tell it to do. Exactly. Of course, that very precision often leads to bugs, especially on the first try. But that's okay, because whatever the program does wrong is right there in the code.

It may be a challenge to come up with the right algorithm or to translate the algorithm and data structures in your head into working code, but code is code. It’s consistent and reasonably predictable.

Then there’s AI. Giving instructions to an AI like ChatGPT is much more like managing a programmer than it is like programming. Everything is subject to interpretation and negotiation. Yes, you can get results, and sometimes you can get results you couldn’t have gotten without a lot of coding, but there’s still some degree of haggling, negotiating, reframing requests, and try after try to get it right.

Also: Okay, so ChatGPT just debugged my code. For real.

You can give an AI the same prompt twice and it will return two different results. Unless your code has some sort of randomization function or a serious bug, you can run your code twice and it will return exactly the same results.
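That difference is easy to sketch in a few lines of Python. The function names here are mine, purely illustrative: a pure function returns the same answer every time, while anything that samples among options by chance, the way language models sample tokens at nonzero temperature, carries no such guarantee.

```python
import random

def slugify(title):
    # Deterministic: the same input always produces the same output.
    return "-".join(title.lower().split())

def sampled_synonym(word):
    # Stand-in for an AI: it picks among plausible answers by chance,
    # so two identical calls can return different results.
    return random.choice(["regress", "backslide", "decay"])

# Two identical calls, two identical answers -- guaranteed.
assert slugify("Will AI Take Jobs") == slugify("Will AI Take Jobs")
# Two identical calls to the sampler make no such promise.
```

The contrast is the whole point: debugging `slugify` means reading the code, while "debugging" the sampler means haggling with it.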

Will AI take programming jobs?

I've been giving this question a lot of thought, especially in light of some prompt writing I did this weekend for an article on advanced prompt writing. In that article, I tried to get ChatGPT to solve a very simple problem, and it wound up taking me hours and more than 20 prompt attempts to get it to work reliably. The prompt was:

Word similar to devolve that begins with a B

ChatGPT kept giving me answers that began with a "D," seeming fully confident in those answers. When I pointed out that the words it returned did not begin with a "B," it apologized and then made the same mistake. Over and over and over again. It felt very much like I was talking to a particularly stubborn employee, trying to get them to see what I wanted them to do.

Also: How ChatGPT can rewrite and improve your existing code

There was a time when I managed a few salespeople who sold over the phone. They were asked to call a fairly warm prospect list and pitch our services. I gave them an exact description of how to pitch our services, but we had one salesperson who just refused to stick to the script.

As such, some of the people she called were turned into hot leads…until we met with the prospects, only to find out that they had the wrong idea about the services we offered. She liked her description better because it made getting appointments easier.

But it wasn’t about making appointments. It was about making sales. She wasn’t even compensated on making appointments, but that didn’t matter. She liked her way better.

ChatGPT is like that. By the time I spent a few hours trying to get it to return a word beginning with a B, I reached the stage where I wanted to yell at it, “Well, what would it take to convince you that the word DEVOLVE begins with a D?”

I wasn’t coding. I was negotiating. I spent a good part of my Sunday haggling with a robot, all the while thinking, “So this is progress?”

Also: How to use ChatGPT to create an app

I’ve always been fascinated by AI, and we’re at the point where the technology is close to what I dreamed it would become. I’ve worked with AI and the implications of AI as far back as my thesis work in college. And yet, after a few hours, I felt like banging my head against a wall. I wanted to scream at the top of my lungs and tear my hair out.

So it was a lot like managing some of the direct reports I’ve had over the years — and, if I’m honest, a lot like how my bosses felt managing me when I was younger.

I did eventually come up with a reliable prompt, and the article describes why it works. But it became very clear to me that while AIs might well take over some low-level programming work, the fact that they behave so much like employees may offer human workers some protection.

Also: The best AI chatbots to try 

The following table shows that some tasks are easier to do in code and others are easier with an AI. As you can see, the combination of the two is particularly interesting, but using an AI certainly doesn't remove the need for human skill and expertise.

| Task | Code | AI |
|------|------|-----|
| Getting data | You'll need to find a large dataset and use a specific API to retrieve individual data items. | Just describe what you need and the AI will find it somewhere. It's easy to do. |
| Accuracy of data | If the dataset is accurate and your code runs correctly, the data will be accurate. | There is no provenance to the data you retrieve. It could even be completely made up by the AI. |
| Creating instructions | You need to be familiar with how to code and how to design an algorithm, as well as various APIs and language interactions. | If you can describe it, you can generally make it happen by simply telling the AI what you want. |
| Following directions | Your code will do exactly what you tell it, including making errors if you haven't fully debugged it. | The AI will roughly interpret what you asked for and will sometimes stubbornly do whatever it wants anyway. |
| Executing complex instructions and getting reliable results | You need to be an experienced coder with a full grasp of how to construct algorithms and write code. | You need to be an experienced "prompt engineer" with a full grasp of how to specify problems and how they should be solved. |
| Are skills and training required? | Newbie programmers can do some projects, but real work requires a deep understanding of how to get the job done. | Anyone can write simple prompts, but solving complex problems requires a deep understanding of how to get the job done. |


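One way that combination plays out in practice is wrapping the AI in code that verifies its answer. Here's a hypothetical Python sketch, where `ask_llm` is a stand-in for whatever chat API you happen to be calling; the validation check is exactly what my "begins with a B" prompt needed, supplied by code instead of by hours of haggling.

```python
def get_b_word(ask_llm, max_tries=20):
    """Keep re-asking until the model's answer actually starts with a B.

    ask_llm is a placeholder for a real chat-API call. The code does the
    verification the model can't be trusted to do for itself.
    """
    prompt = "Word similar to devolve that begins with a B"
    for _ in range(max_tries):
        # Normalize the reply: drop whitespace, quotes, and periods.
        word = ask_llm(prompt).strip().strip('."').lower()
        if word.startswith("b"):
            return word
    raise RuntimeError("No B word after %d tries" % max_tries)

# Demo with a fake model that fails twice before getting it right:
answers = iter(["decline", "degrade", "backslide"])
print(get_b_word(lambda prompt: next(answers)))
```

The pattern is the point, not the vocabulary problem: the AI does the open-ended part, and deterministic code does the checking.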
Here on ZDNET, we’ve run a few articles that spotlight surveys of programmers’ experience using generative AI to help with code. The prevailing impression is that AI can make programmers more productive and help teach more junior programmers new techniques. But as I’ve shown in my numerous programming articles on AI, the code doesn’t always work.

Also: The 10 best ChatGPT plugins (and how to make the most of them)

I have no doubt that AI will transform programming jobs, and take some of the work away from real people. But, at least for the current generation of AI engines, getting anything real done will require some level of expertise, whether that be coding expertise, prompt writing expertise, or — more likely — a mix of both. Plus a healthy dose of patience.


You can follow my day-to-day project updates on social media. Be sure to follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.

