Two authors sued OpenAI, accusing the company of violating copyright law. They say OpenAI used their work to train ChatGPT without their consent.
Can’t reply directly to @OldGreyTroll@kbin.social because of that “language” bug, but:
The problem is that they then sell the notes in that database for giant piles of cash. Props to you if you’re profiting off your research the way OpenAI can profit off its model.
But yes, the lack of meat is an issue. If I read that article right, it’s not the one being contested here though. (IANAL and this is the only article I’ve read on this particular suit, so I may be wrong).
Was also going to reply to them!
"Well, if you do that, you source and reference. AIs do not do that, and by design can't.
So it's more like you summarized a bunch of books, passed it off as your own research, then published and sold that.
I'm pretty sure the authors of the books you used would be pissed."
Again cannot reply to kbin users.
“I don’t have a problem with the summarized part ^^ What an AI lacks is that it cannot credit or reference. And that it makes up credits and references if asked to do so.” @bioemerl@kbin.social
Good point, attribution is a non-trivial part of it.
It is 100% legal and common to sell summaries of books to people. That’s what a reviewer does. That’s what Wikipedia does in the plot section of literally every Wikipedia page about every book.
This also ignores the fact that ChatGPT is a hell of a lot more than a bunch of summaries.
@owf@kbin.social can’t reply directly to you either, same language bug between lemmy and kbin.
That’s a great way to put it.
Frankly idc if it’s “technically legal,” it’s fucking slimy and desperately short-term. The aforementioned chuckleheads will doom our collective creativity for their own immediate gain if they’re not stopped.
On top of that, they have no way of generating any notes without your input.
I believe the way these models work is fundamentally plagiaristic. It’s an “average of its inputs” situation, not a “greater than the sum of its parts” one.
GitHub Copilot doesn’t know how to code, it knows how to copy-and-paste from people who do. It’s useless without a million devs to crib off.
I think it’s a perfectly reasonable reaction to be rather upset when some Silicon Valley chuckleheads help themselves to your life’s work in order to build a bot to replace you.