Setting aside the usual arguments in the anti- and pro-AI art debate and the nature of creativity itself, perhaps the negative reaction the Redditor encountered is part of a sea change in opinion among people who think corporate AI platforms are exploitative and extractive in nature, because their datasets rely on copyrighted material used without the original artists’ permission. And that’s without getting into AI’s negative impact on the environment.

  • deepblueseas@sh.itjust.works
    8 months ago

    It’s clear where you hold your stakes in the matter and where I hold mine. Whether or not you want to continue the conversation is up to you, but I’m not going to go out of my way to be polite about it, because I don’t really give a shit either way, or whether you’re offended by what I say. AI personally affects me and my livelihood, so I do have passionate opinions about its use, how companies are adopting it, and how it’s affecting other people like me.

    All the article you linked shows is that they held a meeting, which doesn’t really prove anything. The government has tons of meetings that don’t amount to shit.

    So, instead of arguing whether or not the meeting actually shows they are considering anything different, I will explain my personal views.

    In general, I’m not against AI. It is a tool that can be effective at reducing menial tasks and increasing productivity. But, historically, such technology has done nothing but put people out of work and increase profits for executives and shareholders. At every given chance, creatives and their work are devalued and categorized as “easy”, “childish”, or not a real form of work by those who do not have the capacity to do it themselves.

    If a company wants to adopt AI technology for creative use, it should be trained solely on content that they own the copyright to. Most AI models are completely opaque, and their makers refuse to disclose the materials they were trained on. Unless they can show me exactly which images were used to generate the output, I will not trust that the output is unique and not plagiarizing other works.

    Fair use has very specific cases where it’s actually allowed - parody, satire, documentary and educational use, etc. Common people can be DMCA’ed or targeted in other ways for even small offenses, like remixes. Even sites like archive.org are constantly under threat of lawsuits. In comparison, AI companies are seemingly being given a free pass because of wide adoption, their lack of transparency, and the vagueness as to where specifically the output is being derived from. A lot of AI companies are adopting opt-out policies to cover their asses, but this is only making the perception of their scraping practices worse.

    As we are starting to see with some journalism lawsuits, the plaintiffs are able to point out specifically where their work is being plagiarized, so I hope that more artists will speak up and also file suit against models where their work is blatantly being used to train a mimic of their styles. Because if someone can file a copyright suit against another person for such matters, they should certainly be able to sue a company for the same unauthorized use of their work when it’s being used for profit.