[Image: an old sawmill with a water wheel in front of a stand of trees; the wood is weathered and the mid-summer sun is bright.]
Mabry Mill, milepost 176 on the Blue Ridge Parkway. Staff at this small living-history museum demonstrated much of the labor required for daily life just 120 years ago. Although we might think of that work as “manual,” even without modern technology the mill represented a significant degree of mechanization and labor-saving machinery.
Source: John Williams

18 months (or so) with AI

It seemed like a good time to do a check-in on where my head is at with generative AI these days, especially since my country’s right wing seems to think most government employees can now be replaced with AI. (I don’t know if they actually believe that — it’s hard to tell what MAGA believes and what is merely pretext — but it’s what some of them are telling their constituents.) This impulse is what a lot of us expected, of course. Some folks just can’t resist the opportunity to rid their organizations of troublesome, expensive humans and replace them with theoretically more compliant, less needy computers. There are, of course, plenty of voices claiming that AI helps but can’t replace humans, so jobs won’t be lost… but that’s a hard argument to make to anyone paying attention.

I remain frustrated by arguments that we should all refuse to use AI because it threatens to replace our jobs, given that jobs have been disappearing to automation for pretty much the entire history of automation. It’s especially frustrating coming from web workers, who’ve already replaced skilled press operators, typesetters — most of the printing industry, really.

At a minimum that’s a lack of awareness. At worst, there’s a classist assumption that so-called professional jobs are supposed to be exempt from automation. That’s the “I don’t want AI to write my emails, I want it to clean my bathroom” argument. Your job, but not my job.

[Image: closeup of a three-color handknit swatch in alternating triangular patches of yellow, green, and turquoise.]

Handknitting, once a critical life skill, has largely been supplanted by the automated, mechanical manufacture of clothes and fabric goods. The hobby that remains is a pastime — and an often expensive one, at that.

The past teaches us that those who resist useful automation can’t really stop it, and much of our lives is shaped by the results of that process — the clothes we’re wearing, the toast we had with breakfast, even the music we listen to. So I’ve felt it necessary to engage with AI lest my profession move on without me.

The question for me remains: is this a useful technology? Because useless, or insufficiently useful, technology tends to disappear with barely a whimper.

Remember all the “metaverse” hype? VR, and VR presence, was supposed to change every facet of our lives — especially the lives of office workers. And yet: did Meta ever invest in moving their own workforce into their product? Did they make virtual office buildings where all the Meta employees sat around virtually typing at virtual desks with legless avatars?

No, Meta did not. Instead, Zuck insisted that physical presence at work was vital, and he ordered everyone back to the office.

Turning off Copilot suggestions

In the ensuing eighteen months I’ve used GitHub’s Copilot mostly as an advanced code-completion tool. In other words, like a typing assistant. My conclusions haven’t changed very much from my first impressions, except that about a month ago I turned the code suggestions off. I felt like the hits had become too few to make up for the frustration of the misses.

One big reason I turned off the Copilot suggestions was that they often seemed to take the place of code completion offered by IntelliSense — a non-generative tool that nevertheless has a much better understanding of the code I am actually working with. I could count on IntelliSense helping me fill out a JavaScript “import” statement accurately, but Copilot would suggest packages I did not have (or had often never heard of). It’s possible there’s a workaround for this, but after a certain point — and in the interest of removing a splinter — I just turned it off entirely.

By that point I had already turned it off entirely in CSS files because Copilot would often suggest old practices, bad practices, or vast swathes of design choices completely unrelated to the design I was working with.
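For anyone who wants to do the same, here is roughly what that looks like in VS Code’s settings.json, assuming the standard GitHub Copilot extension; which language IDs you list is up to you (mine here are just illustrative):

```jsonc
{
  // Per-language switch for Copilot's inline suggestions.
  // "*" sets the default; individual language IDs override it.
  "github.copilot.enable": {
    "*": true,
    "css": false,
    "scss": false
  }
}
```

Flipping "*" to false is what turning it off entirely looks like.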

Where Copilot offered suggestions in areas where I felt less capable, I caught myself accepting the code uncritically, without ever really questioning what was going on. In the past, I would have looked something up and learned something in the process. Falling into vibe coding made me worry that I was speeding up my own obsolescence.

The other problem with Copilot’s suggestions was that they often weren’t quite what I had in mind. Sometimes they were as good as or better than what I was doing, but sometimes they were worse, or simply wrong. Having them appear constantly felt like a steady interruption. After several months, and an experiment with turning the suggestions off altogether, I realized that the creeping frustration and impatience I’d been experiencing at work was the result of having my concentration and attention constantly broken. It was the same frustration I’ve had with pair programming and open offices.

How others around me are using Copilot

When I asked my colleagues recently how they were using AI, I discovered many of them had turned off the Copilot suggestions as well. “I use it instead of Ruby documentation,” one of them said, “because Ruby documentation is awful.” Another highlights bits of code and asks Copilot what it’s doing, or sometimes has it suggest improvements to code and tests. I am experimenting with both of these things, but knowing how Copilot gets its data and seeing the atrocious CSS it repeats makes me more than a little leery.

It’s a weird situation. You need the help most for coding tasks outside your usual domain — but that’s also where you are least capable of judging and correcting the results. I’m still experimenting with it, but the results are mixed.

An example: struggling with GitHub Actions, I asked Copilot for help. It confidently gave me answers that seemed right at the time but were wrong, and the errors were ones of logic rather than syntax, so nothing obviously failed. Because I felt less confident with Actions, I spent my time hunting for errors in my own code and didn’t realize that Copilot was simply wrong until much later.

Generative AI is both easy and hard

There is a more-or-less canned response to “this AI keeps giving me wrong answers”: you have to learn how to ask it properly. There’s a cognitive dissonance here. On the one hand, AI is easy: you just ask it questions in natural language and it generates stuff for you. On the other hand, you have to carefully craft your prompts in a specific way, sometimes iterating on them, to get reasonable results.

I’m not sure the AI boosters realize this, but it feels a bit like a bait-and-switch. “Just talk to it normal. Also take my $300 class on how to write prompts.”

One developer I know who does a lot of work developing AI assistants described a strategy in which multiple “agents” coordinate in the background, each specialized to a specific task, sometimes offloading work to more traditional programming to get a correct answer. Building this kind of thing is non-trivial and requires a great deal of understanding of how the AI works. The end result, while better, is still not entirely reliable.

💁🏻‍♂️

A friend of mine has found a really useful purpose for generative image AI: creating paintings of Jesus smoking blunts. If that’s not a good use of resources, is anything?

That agent setup is also basically a department full of experts, except none of them insists on health insurance.
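I haven’t built one of these myself, but as I understood the description, the shape of it is something like the sketch below. Everything in it is hypothetical: callModel() is a stand-in for whatever LLM API would actually be called, and the calculator agent shows the offloading-to-traditional-programming trick, where a deterministic answer never touches the model at all.

```typescript
// Hypothetical sketch of the coordinated-agents pattern described above.
// None of this is a real AI library; callModel() is a stand-in.

type Agent = {
  canHandle: (task: string) => boolean;
  run: (task: string) => Promise<string>;
};

// Stand-in for a real LLM API call.
async function callModel(systemPrompt: string, task: string): Promise<string> {
  return `[model answer for: ${task}]`;
}

// A "specialist" is mostly just a narrowly scoped system prompt.
const sqlAgent: Agent = {
  canHandle: (task) => /\b(sql|query|table)\b/i.test(task),
  run: (task) => callModel("You only write and explain SQL.", task),
};

// Offloading to traditional programming: plain arithmetic is
// computed deterministically instead of being generated.
const calculatorAgent: Agent = {
  canHandle: (task) => /^[\d\s+\-*/().]+$/.test(task),
  run: async (task) => String(Function(`"use strict"; return (${task});`)()),
};

// Catch-all agent for anything the specialists decline.
const generalAgent: Agent = {
  canHandle: () => true,
  run: (task) => callModel("You are a general assistant.", task),
};

// The coordinator routes each task to the first agent that claims it.
const agents: Agent[] = [calculatorAgent, sqlAgent, generalAgent];

async function dispatch(task: string): Promise<string> {
  const agent = agents.find((a) => a.canHandle(task))!;
  return agent.run(task);
}

// The arithmetic never reaches the model:
dispatch("2 + 2 * 10").then(console.log); // "22"
```

Even in this toy form you can see why building the real thing is non-trivial: the routing rules, the prompts, and the fallbacks all have to be designed by someone who already understands the failure modes.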

The truth here is that AI is difficult to get a specific result out of, and the more reliable, repeatable, and complex the results need to be the more difficult the process becomes. AI is easiest and most cost-effective when you don’t care about the quality of the result. The moment you do care, things get a lot more complicated. That’s where another risk sits.

You are delegating (even if you don’t know it)

Generative AI requires a lot of compute power and — for the moment, at least — reliance on third-party services that are often priced on incremental usage.

Business products built on this technology strike me as particularly susceptible to business and pricing decisions outside a developer’s control. AI compute time is inexpensive now, but if and when people get locked into the technology, the inevitable enshittification risks degrading many products all at once, putting business plans and probably even lives at risk. Building on AI at the moment feels like becoming an Instagram influencer, or maybe an Uber driver: you are entirely dependent on the whims of a larger company that is only helpful to you until it gains enough power to exploit you. The real value of AI is not how it makes programming more efficient; it’s how it puts compute power under contract once again.

The big names in the generative AI space have either already established patterns of exploiting consumer dependence on their products or are bankrolled by companies that have. We know already that their customers’ interests are not their primary concern. And the fact that they threaten their own staff with replacement by AI makes the argument that AI is here to help, not replace, ring hollow.

I can’t shake the feeling that using GenAI for programming puts independent programming at risk in much the same way that the digital marketplace does an end-run around long-established ownership rights. People building products and businesses on top of AI services need to be aware that they are binding themselves to companies with already established monopolistic aspirations much more tightly than they might expect.

Artifacts and process

I’m participating in this blogging exercise because writing is important to me as a thinking tool. The process helps me order my thoughts, and re-reading them later helps me see where my thinking has changed or been confirmed. I could have asked AI to do it, but getting it to reflect my thoughts accurately would (for me) be much more difficult than just doing the writing. This matters not just for academic and creative writing but for business writing as well: the act of writing forces you to process information in an orderly way, and if you delegate that work to an assistant or a robot, you don’t get that benefit.

This is especially true with note-taking. Having an AI take notes for you in a meeting is a bit like hiring someone else to lift weights for you. The notes get written, but the power of note-taking lies in using your head to reprocess and rephrase the information and your actual hand to mechanically write it down. (See also: Sönke Ahrens, How to Take Smart Notes).

I wouldn’t go so far as to say “it’s not the destination, it’s the journey” — I like getting where I want to be quite a bit. But most work benefits from the human effort involved in doing it, not just from the fact of its being done. This is true for many tasks that might seem mundane, tedious, or uncreative, and attempts to standardize or automate the process, with AI or otherwise, tend to harm the end product.


A year and a half on, I’d have to describe myself as ambivalent, approaching hostile, toward generative AI in its current form. It’s easy to get garbage out of it, but difficult to get a quality artifact. As a work partner I’ve found it only marginally helpful. It risks extending the reach of monopolists, and it robs the people using it of much of the value of doing the work in the first place. Most of the value proposition of AI comes not from augmenting human effort — it’s not there yet — but from replacing bothersome humans who might inconveniently care more about their work than the people asking for it do.

But I still feel like I need to engage with it, because maybe I am wrong. And besides, all of us have to live and function in the world as it is, not the world we wish it to be. I literally cannot afford to shun generative AI, as much as I would like to.