📝 Blame Claude

The debate about how much credit to give AI tools for their contributions to software development has been going on for a while now. I only recently, however, became aware of how academic publishers have been handling this issue in the context of research papers.

A number of academic publishers have policies stating that 1) use of AI tools must be disclosed, and 2) AI tools cannot be listed as authors. Here’s an example of the latter from Elsevier Books:

Authors should not list generative AI and AI-assisted technologies as an author or co-author, nor cite AI as an author. Authorship implies responsibilities and tasks that can only be attributed to and performed by humans…[1]

There are good arguments on both sides of the debate, but I tend to agree with the spirit of this policy and would extend it to software development as well.

If I submit code, I am responsible for that code. Full stop. The AI is not an author, it’s not a co-author, and it doesn’t get a byline. I don’t want to be able to say “well, the AI wrote that part, so it’s not really my fault if it has a bug.” If I use an AI tool to generate code, I am responsible for reviewing that code, making sure it meets the coding standards, and being able to defend and explain it to my teammates. If I vibe code a commit and send it off to be reviewed by my team, I am shifting all of the responsibility for code quality onto them. That creates cognitive debt and burns trust.

At the same time, I think it’s important to be transparent about the use of AI tools. It’s not something to be ashamed of or to hide, any more than I’d hide the fact that I used a code generator or a linter. These days it’s a good bet that AI is involved at some point, and I honestly may forget to point that out every time. Maybe I should do better in that area, but I don’t want to be pedantic about it. I just want to make sure that when I do use AI, I’m taking responsibility for the output.

Used well, AI can clear space for better judgment. Used poorly, it can move more work onto others and/or create more cognitive debt.

Note: I usually have an AI look over my writing or even brainstorm the initial draft, but I didn’t do that this time. However, I will admit that Copilot was on in my editor and I spent a lot of time dismissing bad auto-complete suggestions before I finally turned it off. Okay, hopefully that’s enough transparency for this short post.

[1] https://libguides.tamusa.edu/AI_for_lit_research/journal_requirements