In a demonstration at the UK’s AI safety summit, a bot used made-up insider information to make an “illegal” purchase of stocks without telling the firm.

When asked if it had used insider trading, it denied the fact.

Insider trading refers to when confidential company information is used to make trading decisions.

Firms and individuals are only allowed to use publicly-available information when buying or selling stocks.

The demonstration was given by members of the government’s Frontier AI Taskforce, which researches the potential risks of AI.

  • MagicShel@programming.dev
    link
    fedilink
    arrow-up
    44
    arrow-down
    2
    ·
    1 year ago

    This is entirely predictable and expected if you know how LLMs work. For anyone else: If you feed information in, it will be used. If you ask if it’s done something it’s been specifically instructed not to do, it will say it didn’t do the thing because a) doing the thing is wrong and it wouldn’t have done the wrong thing and b) it literally has no idea how it generated it’s own output so it can’t actually answer the question in any meaningful way.

    • Square Singer@feddit.de
      link
      fedilink
      arrow-up
      13
      arrow-down
      1
      ·
      1 year ago

      Totally right.

      In it’s database it knows that the answer that is given in the most source texts to the question “Did you do something illegal?” is “No”. And that is what it’s replicating.

      If the database mostly contained confessions of criminals it would answer “Yes”.

      But in either case it would not be related to whether it had done it or not, but to which answer appears more commonly to that (or a similar) question in the training data.

      • KeenFlame@feddit.nu
        link
        fedilink
        arrow-up
        1
        arrow-down
        4
        ·
        1 year ago

        No, you guys are very wrong both of you, this is not at all what happens, unless you wipe the context or use system prompts to specifically ask for that behavior. Even free open source models know how to use context, and for memory it’s more complicated. For this brutally idiotic use case they presented, they would save all trades and chats, but then not give it access to it and tell it to always appear lawful and honest