This might also be an automatic response to prevent discussion, though I'm not sure, since it's MS's AI.

  • plz1@lemmy.world · 2 years ago

    I get Copilot to bail on conversations like your example so often that I'm only using it for help with programming/code snippets at this point. The moment you question its accuracy, bam, chat's over.

    I asked if there was a Copilot extension for VS Code, and it said yup, talked about how to install it, and even configure it. That was completely fabricated, and as soon as I asked for more detail to prove it was real, chat’s over.

    • DetectiveSanity@lemmy.world · 2 years ago

      That would force them to reveal its sources (unconsented scraping) and hence make them liable for potential lawsuits. So they have to avoid revealing sources.

  • otp@sh.itjust.works · 2 years ago

    I think the LLM won here. If you're being accusatory and outright saying its previous statement is a lie, you've already made up your mind. The chatbot knows it can't change your mind, so it suggests changing the topic.

    It's not a spokesperson/bot for Microsoft, nor a lawyer. So it knows when it should shut itself off.

  • eveninghere@beehaw.org · 2 years ago

    This is actually an unfair experiment. This behavior is not specific to questions about MS. Copilot is simply incapable of this type of discussion.

    Copilot tends to just paraphrase text it has read, and when I challenge the content, it ends the conversation like this instead of engaging in a meaningful dialogue.