WotC to update artist guidelines moving forward.

  • KurtDunniehue · 11 months ago

    I don’t think those advancements were categorically good, or that they were morally right to happen. I won’t go through them all, but just because something has happened doesn’t mean it was inevitable, or that it was a good thing and the world is better for it.

    But putting that aside, the clearest difference I see between those advancements and Machine Learning (a subset of Artificial Intelligence research) is that Machine Learning always needs datasets to train the system. As a result, Machine Learning generation truly isn’t coming up with something new; it is just repackaging the work of other people. This is further morally fraught, because you have built a system with the aim of making people’s work irrelevant, while using their own work to do so without their consent.

    And as to your proposition that artists shouldn’t have to make money to live, I agree wholeheartedly. But this technology isn’t going to lead to that future. It is currently being used by people with means to make more money by cutting out the people who would have to be paid to make creative works. Machine Learning already did this with language translators.

    When Google Translate was getting somewhat good in the early 2000s, many companies fired their foreign-language translators. What they quickly discovered was that the technology wasn’t quite there yet, so they had to hire them back. But by and large, they didn’t hire them back as translators; they hired them as editors who would clean up the bad translations from Machine Learning translation software. We’re currently on the same trajectory with this technology for a wide swath of creatives.

    This is bad right now and for the foreseeable future. I do not foresee a future where we are freed from needing to exchange the majority of our waking lives for money, and this technology will only perpetuate that reality.

    “That all sounds really dramatic and escalating.”

    And yes, I do believe you’re being rather dramatic by implying that I’m a Luddite who doesn’t want technology to work at all. I want technology to work for people, not the other way around. I want the Jetsons future, where people work a minority of their lives rather than the majority, and where we can focus on quality of life instead of vainglorious pursuits that ultimately benefit the idle rich. The trajectory of this technology will only benefit those who don’t need to work to live.

    • KoboldOfArtifice · 11 months ago

      I find it unrealistic to deny that a future is possible in which we don’t have to spend a large part of our time working to earn a living, while believing that you can somehow halt the progress of technology. Instead of directing the outrage at your government and making it acknowledge that before too long automation will make any form of large-scale employment a mere farce, you’d rather hope that corporations will be nice and not use AI-generated artwork for their images.

      This technology is on its way to benefiting everyone, unless you allow it to be monopolised by those who wish to do so. AI empowers its users to do what previously could only be done by many people working together, at a relatively negligible price. The fact that AI uses other people’s art as input doesn’t mean it just repackages other people’s content. That’s the same as saying that human artists are just repackaging other people’s content because they learned by looking at other people’s art and seeing what works.

      The learning process employed in Machine Learning and the learning process that humans engage in are not fundamentally different. They might not use the exact same set of mechanisms, but in the end both boil down to seeing what works and making things similar to that. That’s what humans already do.

      A model trained on all that data, if well trained, does not contain the data needed to reconstruct any of the images used to make it.
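
      A rough back-of-envelope comparison makes the point; the figures below are assumed ballpark values for a Stable Diffusion v1-class model and its training set, not exact numbers:

        # Back-of-envelope: can a model's weights "store" its training images?
        # All figures are assumed ballpark values, not exact.
        checkpoint_bytes = 4e9     # ~4 GB of fp32 weights (assumed)
        training_images = 2e9      # ~2 billion image-text pairs (assumed)
        avg_image_bytes = 100e3    # ~100 KB per compressed source image (assumed)

        bytes_per_image = checkpoint_bytes / training_images
        print(f"weights available per training image: {bytes_per_image:.1f} bytes")
        print(f"compression verbatim storage would need: {avg_image_bytes / bytes_per_image:,.0f}x")

      Under those assumptions there are only a couple of bytes of weights per training image, which is nowhere near enough to store the images verbatim.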

      Something that may be relevant is the application of completely self-learned models. If an AI were able to learn to make art without using any human art (just human input on the quality and tagging of the created pieces), would you feel better about that replacing artists? Because that is certainly something we will see in the future too. Back when AI started beating the best Go players in the world, the critique was that it hadn’t surpassed humans in skill, since it had learned from humans to get that good. So they made a version that learned only by playing against itself, with no human input at all. There’s nothing stopping art from being created in a similar way, as long as humans give input on what they like and what they don’t.
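
      As a toy illustration of learning purely from feedback (a hypothetical sketch, not how AlphaGo Zero or any real image generator works), here is a loop in which a “generator” starts from random noise and improves using nothing but a rating signal; the score() function is a made-up stand-in for human preferences:

        import random

        # Toy "learning from feedback alone": no human-made examples are used,
        # only a score per generated candidate. score() stands in for a human
        # rater and arbitrarily prefers bright, high-contrast 8x8 "images".
        SIZE = 8 * 8

        def score(image):
            brightness = sum(image) / len(image)
            contrast = max(image) - min(image)
            return brightness + contrast   # proxy for "how much a rater liked it"

        def mutate(image, step=0.1):
            # Propose a slightly perturbed candidate, keeping values in [0, 1].
            return [min(1.0, max(0.0, p + random.uniform(-step, step))) for p in image]

        # Start from pure noise and keep whichever candidate the rater prefers.
        current = [random.random() for _ in range(SIZE)]
        for _ in range(500):
            candidate = mutate(current)
            if score(candidate) > score(current):
                current = candidate

        print(f"final score: {score(current):.3f}")

      The only point of the sketch is that the training signal can come entirely from ratings of generated output rather than from a corpus of human-made works.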