(When I write, I’m usually not speaking for my employer, but I’m super-duper not speaking for my employer here!)
There are a lot of opinions about the ethics of using LLMs. I’ve been considering the arguments I’ve seen for a while, and writing something down is a great way to clarify one’s thinking. So, here I go!
- Environmental concerns
- Training data concerns
- Uses of LLMs
  - Writing code
  - Writing text
  - Summarizing text
  - Image generation
Environmental concerns
I think this has been pretty overblown, both in terms of electricity usage and water usage (for cooling data centers). Here’s a good article showing that even the author’s heavy use of Claude Code for a day consumes roughly the energy it takes to run a dishwasher once.
This doesn’t account for the energy it takes to train models, which I haven’t seen a good estimate of. Honestly, solar panels are cheap enough that we should be building them as fast as we can anyway!
Training data concerns
Undoubtedly the biggest models have been trained on copyrighted data without authors’ consent. (You can see whether your name shows up in the LibGen dataset here, and if so, register to get a settlement from Anthropic.) This is bad, and I do think companies should have to pay for using people’s words. I should probably feel more strongly about this, and I’m sure I would if I made a living writing stuff.
Uses of LLMs
The two main questions I ask here are:
- Is it OK that LLMs can give wrong answers?
- Are you wasting people’s time?
Writing code
I would argue that using LLMs to write code (via an agent like Claude Code, or just by asking an LLM individual questions) is probably their least problematic use. It is OK that an LLM will produce wrong code; that’s why you should have a good test suite! (Although you’d better know what you’re doing security-wise…) And provided that you review the LLM-generated code before you send it out for review (like what the LLVM project is requiring), this seems pretty fine to me. (As a bonus, I would imagine much of the code the LLM was trained on was open source to begin with.)
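To make the test-suite point concrete, here’s a minimal sketch (the function and its purpose are hypothetical, not from any real project): the assertions encode what *you* mean, so they catch a wrong answer no matter who, or what, wrote the function.

```python
import re

def slugify(title: str) -> str:
    """Turn a post title into a URL slug. (Imagine an LLM wrote this.)"""
    # Lowercase, collapse runs of non-alphanumerics into single hyphens,
    # and trim hyphens from the ends.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# The tests are the spec; an LLM version that mishandles punctuation
# or whitespace fails here before anyone reviews it.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  LLMs: Good or Bad?  ") == "llms-good-or-bad"
```

The same principle scales up: the more of your intent is captured in tests, the safer it is to let an LLM draft the implementation.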
Writing text
Hmm…it depends? I don’t think there’s anything wrong with asking an LLM for suggestions to improve or reword your writing (although I don’t do this!). But the more the LLM writes, the more the humanity is taken out, like that terrible Google “Dear Sydney” ad that ran during the last Olympics. And don’t get me started on writing a few sentences and asking the LLM to make them longer; that is the epitome of wasting readers’ time! If you’re tempted to do this, just send the few sentences you have instead!
Summarizing text
How fine this is varies inversely with how important the text is. LLMs make mistakes: if one is summarizing a promotional email, that’s probably fine, but summarizing an important presentation or a letter from a crush is pretty dangerous!
Image generation
Ehh, this seems not great to me, because most of the time making an image involves a lot of human expression. I guess if you’re just making it for yourself, whatever, but otherwise it feels gross to me. (We had someone make an LLM-generated image of Jesus to show at our church’s children’s service, and I felt pretty revolted!)
Feel free to tell me why I’m wrong!