• elshandra@lemmy.world · 10 points · 7 months ago

    If my AI bot is as exaggerated, fake, and dense as so many YouTubers seem to be these days, I think it will find itself without communication components in a very short time.

  • Dizzy Devil Ducky@lemm.ee · 5 points · 7 months ago

    Can’t wait for this to backfire because those same companies might end up training on data created by using their AI. Can’t wait for the most popular videos you see when you’re not logged in to become AI-generated videos so bad they’re literal nonsense, causing the algorithm to start punishing any videos that do make sense.

    I have a feeling we’re gonna go into a content death spiral on the platform, worse than anything we’ve ever seen before. Only because the largest channels will end up abusing it with reckless abandon.

  • FarraigePlaisteach@kbin.social · 5 points · 7 months ago

    If they’re scraping the web, and they’re generating AI content on the web, how do they avoid training their AI on its own nonsense somewhere?

  • Immersive_Matthew@sh.itjust.works · +8/−6 · 7 months ago

    We already know they used all the public information on the Internet. How is this news? If AI is going to be any use, it needs to learn from somewhere.

    • circuitfarmer@lemmy.sdf.org · 4 points · 7 months ago

      People have been used to a lot of private services for a while now. YouTube is so ubiquitous it’s almost like a utility, in that everyone always has access to it and it’s just everywhere, with no real competitor.

      But all of these social media services are private, so as much as they feel like public information utilities, once you’re on one, your data isn’t your own. I think that’s the disconnect when people hear that “their data” has been used for AI training. It ceased to be their data as soon as it went on the platform, at least tacitly in the US.

      There has traditionally been a public expectation of control that simply isn’t there for any of these services. The industry knows this and capitalizes on it regularly. It’s a key tenet of technofeudalism.

    • Drewelite@lemmynsfw.com · 2 points · edited · 7 months ago

      Yeah, I’ve had this loop-de-loop conversation with a few people now:
      “Are you against AI in principle?”
      “No, they just shouldn’t use copyrighted material!”
      “But you want them to be very similar to a human?”
      “Yes”
      “Have you ever talked to someone who’s never seen anything copyrighted?”