• Blackmist@feddit.uk
    11 days ago

    I tried to get one to write an interface to a simple API, and gave it a link to the documentation. Mostly because it was actually really good documentation for a change. About half a dozen endpoints.

    It did. A few tweaks here and there and it even compiled.

    But it was not for the API I gave it. Wouldn’t tell me which API it was for either. I guess neither of us will ever know.

    • mrgoosmoos@lemmy.ca
      10 days ago

      I’ve actually used ChatGPT (or was it Cursor? I don’t remember now) to help write a script for a program with a very (to me, a non-programmer) convoluted, but decently well-documented API.

      It only got a few things right, but the key was that it got enough right for me to go and fix the rest. This was for a task I’d been trying to do every now and then for a few years. It was nice to finally have it done.

      But damn, does “AI” ever suck at writing the code I want it to. Or maybe I just suck at giving prompts. Idk. One of my bosses uses it quite a bit to program stuff, and he claims to be quite successful with it. However, I know that he barely validates the result before claiming success, so… “look at this output!” — “okay, but do those numbers mean anything?” — “idk, but look at it! it’s gotta be close!”

  • burntbacon@discuss.tchncs.de
    11 days ago

    I would trust an ‘ai’ that had been designed from the ground up to do well in the stock market, just like I would trust an ‘ai’ that’s been designed from the ground up to drive trains. Idiots who think an LLM is an AI at anything beyond spitting out what seems like reasonable answers/responses to your inputs are, well, idiots.

    • Aceticon@lemmy.dbzer0.com
      11 days ago

      Even then, and as I wrote in another post, a custom trading NN might be working a strategy that is fine under normal market conditions but leads to massive losses if those conditions change (i.e. “picking up nickels in front of a steamroller”). And because of the black-box nature of how neural networks work, and their tendency to end up with outputs that are very convoluted derivations of the inputs (I expect even more so in markets, where the obvious strategies that humans can easily spot have long been arbitraged away, so any patterns such an NN spots during training will be too convoluted for most humans to detect), nobody will spot the risky nature of that strategy until getting splattered.

      Neural networks used to predict market movements are, unlike a predictive-text keyboard or even an automated train driver, not operating in a straightforward, mostly non-adversarial environment.

      • KeenFlame@feddit.nu
        10 days ago

        Haha, in fact, at the microsecond level where the bots operate they employ dynamic tactics just to achieve a goal, and at their disposal they have the leverage of your entire pension. And your whole bank account. Yes. They bet your money. Right now. On millisecond trades. Continuously.

        The funny thing is that it DOES fail. Sometimes they get caught in the steamroller, and then they halt the market, prevent anyone from trading, and go in and change all the daily lending deals so they don’t have to deal with a bank run.

        Because the rules are for me and you, not for the gamer hedge fund bros. They’re not bitches for no reason; they have to do this for work.

    • sp3ctr4l@lemmy.dbzer0.com
      10 days ago

      … you know that goldfish, randomly swimming to one side or another of a fish tank…

      … you know they perform better at picking stocks that will go up or down in the next quarter than nearly all professional hedge fund managers, right?

      In fact, this old experiment was rerun fairly recently… ironically, with an AI being used to simulate a goldfish, in a scenario similar to that old study from some decades back.

      https://www.reddit.com/r/wallstreetbets/comments/tts0a4/some_theories_on_how_the_goldfish_was_able_to/

      The goldfish outperformed both WSB… and the Nasdaq.

      I am literally not even joking when I tell you that a goldfish will probably outperform an AI at stock picking, at least in the fairly short term.


      See, there is a fundamental problem with predicting the market.

      You have to have a strategy by which you do this.

      If you employ this strategy… people will reverse engineer it and figure out how it works.

      Then, everyone does that strategy.

      Then the strategy does not work any more, and ‘nonsense’ begins to happen.

      If you are curious about the mechanics behind that whole meta sort of process, look into game theory under conditions of imperfect information and information asymmetry.

      It’s… basically a robust mathematical approach to simulating the flux of ‘animal spirits’ within a market… or, in modern vernacular, ‘vibes’.
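
      If you want to see the shape of that decay, here’s a toy sketch in Python… completely made-up numbers, just the “edge shrinks as more people copy the strategy” idea, nothing like a real market model:

      ```python
      # Toy illustration only: assume a strategy's excess return decays as the
      # fraction of traders copying it grows (all numbers are invented).

      def excess_return(base_edge: float, adoption: float, crowding: float = 0.9) -> float:
          """Edge left over once a fraction `adoption` of the market runs the same strategy."""
          return base_edge * (1.0 - crowding * adoption)

      for adoption in (0.0, 0.25, 0.5, 0.75, 1.0):
          print(f"adoption {adoption:4.0%} -> remaining edge {excess_return(0.05, adoption):+.3%}")
      ```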

    • Dave@lemmy.nz
      10 days ago

      I still wouldn’t, because the stock market is already full of algorithmic trading and so you’d have to believe yours was better than the big boys out there.

    • BB84@mander.xyz
      11 days ago

      LLMs can help with trading. As an example, if you can read news articles 1000x faster than a human, then you can make appropriate market decisions that much faster and make a profit off that. These need not be very intelligent market decisions. Any idiot and every LLM knows perfectly well which stocks to buy or sell when there is an announcement of a tariff on product xyz (sketched below).

      In case you didn’t know, DeepSeek was made by a trading company.
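
      A toy sketch of that news-reaction idea, for the curious. The classifier below is just a keyword stub standing in for a real LLM call, and the headline, sector, and order logic are all invented for illustration:

      ```python
      # Illustrative only: a stand-in for "LLM reads the news and reacts fast".
      # `llm_classify` fakes the model call with a keyword match; a real system
      # would prompt an actual LLM here and map sectors to real tickers.

      def llm_classify(headline: str) -> dict:
          """Pretend LLM call: tag a headline with an affected sector and a sentiment."""
          text = headline.lower()
          if "tariff" in text and "steel" in text:
              return {"sector": "steel importers", "sentiment": "negative"}
          return {"sector": "unknown", "sentiment": "neutral"}

      def to_order(signal: dict) -> str:
          if signal["sentiment"] == "negative" and signal["sector"] != "unknown":
              return f"SELL names exposed to {signal['sector']}"
          return "HOLD"

      print(to_order(llm_classify("Government announces 25% tariff on imported steel")))
      ```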

    • Blue_Morpho@lemmy.world
      11 days ago

      I would trust AI to beat money managers in the stock market, because it has been shown that a chimp throwing darts beats experienced money managers.

      Driving trains requires skill.

    • SkyezOpen@lemmy.world
      11 days ago

      Yup. Machine learning is great. Using a predictive text keyboard with a large training set for EVERYTHING is not great.

    • Richard@lemmy.world
      11 days ago

      True; I could only get “AI” to do useful stuff when I gave it specialized knowledge on the topic I wanted help with; if I asked outside that given scope, the information would go to shit though.

  • ieatpwns@lemmy.world
    11 days ago

    Lmfao, I saw someone on the train just last week asking ChatGPT how they could turn a profit from all their Friday morning losses

    Off of Robinhood screenshots too

  • TankovayaDiviziya@lemmy.world
    11 days ago

    I just use AI for projected profits and losses, and to determine earnings schedules and reports. I also trade in international markets, and I have used AI there as well. Like a lucky gold miner prospecting, AI has helped me find good leads in international markets.

    But of course, in spite of all that, you have to do your due diligence. You still have to verify whether what the AI is saying is correct.

  • peanuts4life@lemmy.blahaj.zone
    11 days ago

    I don’t remember the institution, but I remember reading a paper on a simulated trading environment with several AI agents that didn’t know about each other. The LLMs were pretty conservative with profits and deliberately bought and sold in predictable ways. They all ended up “colluding” with each other by deliberately not competing.

  • Aceticon@lemmy.dbzer0.com
    11 days ago

    In all fairness, it would be some kind of custom Neural Network designed to try and predict market movements (having been trained with past market data as well as things like counts of specific words in news articles and social media posts within a certain time frame) rather than an LLM.

    Neural networks are pretty good at spotting patterns in masses of data that people can’t easily spot.

    Of course, for it to actually beat the market there must be a pattern there that doesn’t change much over time, of certain things happening with higher probability after certain other combinations of things. It also depends massively on the inputs it’s set up to take (which a human decides rather than the NN itself, though maybe the LLM technique of having huge dimensionality in the inputs and internal layers might work well here, so that it can take “everything but the kitchen sink” as inputs). A rough sketch of that kind of setup is at the end of this comment.

    And then there is, of course, the “small” risk that it works fine for months or years under normal market conditions doing what is essentially “picking up nickels in front of a steamroller” - i.e. making low-value gains in a nice, reliable way for as long as normal market conditions last, but getting totally splattered when conditions change - while, because of the whole black-box nature of NNs, the humans never recognize the convoluted technique it converged on during training as that kind of risky strategy.

    That said, unlike an LLM, at least a custom NN won’t come up with a “you’re so right” excuse when the human tells it about the massive losses it incurred.
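
    For concreteness, this is roughly the shape of the custom network described above, written with PyTorch: hand-engineered features (lagged returns, news word counts, whatever a human picks) go in, a probability of an upward move comes out. The feature count, layer sizes, and dummy data are all invented; it’s a sketch of the architecture, not a working trading model.

    ```python
    # Sketch only: a small feedforward net over hand-engineered market features.
    # Everything here (feature count, layer sizes, dummy data) is made up.
    import torch
    import torch.nn as nn

    N_FEATURES = 32  # e.g. lagged returns plus news/social word-count features (assumed)

    model = nn.Sequential(
        nn.Linear(N_FEATURES, 64),
        nn.ReLU(),
        nn.Linear(64, 16),
        nn.ReLU(),
        nn.Linear(16, 1),
        nn.Sigmoid(),  # interpreted as P(next-period return > 0)
    )

    # Dummy batch standing in for historical feature vectors and up/down labels.
    x = torch.randn(256, N_FEATURES)
    y = (torch.rand(256, 1) > 0.5).float()

    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()
    for _ in range(20):  # tiny training loop, just to show the moving parts
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print(f"training loss on random data: {loss.item():.3f}")
    ```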

    • wizardbeard@lemmy.dbzer0.com
      11 days ago

      Trading firms have been using ML and Neural Nets for trading and investment insight for ages before the current LLM “AI” boom started. I knew someone working in that space on investment derivatives in the mid 2010s.

      You don’t really need to speculate on it. It’s old news. This is just a joke about how there’s a new crop of suckers who are absolutely using LLMs for stock advice.

      • Aceticon@lemmy.dbzer0.com
        11 days ago

        Makes sense.

        I left the finance industry at about the time ML in machine trading was just starting to be thought about, and I never got involved in it (or even in machine trading), so I wasn’t sure it was happening. But knowing what I know of the industry, it makes total sense that they would at least try it out, since they have tons of in-house developers and can afford to pay a lot for domain-relevant expertise.

        PS: Also, things like neural networks have, for example, been in use since the 90s in other domains, and finance seems to take around a decade or a decade and a half to catch up with tech in terms of software.

    • benni@lemmy.world
      11 days ago

      It’s true that NNs are strong at spotting patterns in masses of data, but trading is a particularly hard problem for this kind of task because the market constantly adapts to its participants. If other traders have found a pattern, it will already be priced in by the time you try to make money off it, and your strategy will fail. And since trading is a worldwide competition with billions of dollars to be won, you are naturally competing against teams of the best of the best who are willing to put massive resources into their algorithm development, computing, and data acquisition. Therefore the chances of someone like us finding an algorithm that systematically beats them are very low.

      So for any young math/CS nerd who comes across this thread and wants to try their luck, be aware of the difficulty before you invest any real money, and learn about the merits of passive investing.

  • humanspiral@lemmy.ca
    11 days ago

    You’re absolutely right. I’ve now read your CSV data, and made new trade recommendations.

  • khepri@lemmy.world
    11 days ago

    Here’s a crazy thought: the massive firms that have been trading programmatically using ML since its very earliest adoption are simply going to program their shit to eat ChatGPT’s lunch. I have no doubt whatsoever that these desks are thrilled by the number of people predictably using public LLMs to choose trades: such a fresh new dataset of the newest, smoothest brains for them to exploit.

  • Credibly_Human@lemmy.world
    10 days ago

    This is funny, but just to be clear, the firms that are doing automated trading have been using ML for decades and have high powered computers with custom algorithms extremely close to trading centers (often inside them) to get the lowest latency possible.

    No one who does not wear their pants on their head uses an LLM to make trades. An LLM is just a next word fragment guesser with a bunch of heuristics and tools attached, so it won’t be good at all for something that specialized.

    • Corridor8031@lemmy.ml
      10 days ago

      ML and other algorithms have also been used by firms that do non-automated trading, since before 2000 even,

      but they just use it as part of the decision-making process.

      Last time I looked, around 2020, all the fully automated hedge funds that said they would use AI for trading had failed and did not beat the market, though.

      (High-frequency trading is not what I mean, though; I think high-frequency trading is what you mean.)

    • thespcicifcocean@lemmy.world
      10 days ago

      I hate that AI just means LLM now. ML can actually be useful for making predictions based on past trends. And it’s not nearly as power-hungry.

      • JackbyDev@programming.dev
        10 days ago

        What’s most annoying to me about the fiasco is that things people used to be okay with, like ML, which have always been lumped in under the term AI, are now getting hate because they’re “AI”.

        • thespcicifcocean@lemmy.world
          10 days ago

          What’s worse is that management conflates the two all the time, and whenever I give the outputs of my own ML algorithm, they think it’s LLM output. And then they ask me to just ask ChatGPT to do any damn thing that I would usually do myself, or feed into my ML to predict.

          • KeenFlame@feddit.nu
            10 days ago

            ? If you make and work with ML, you are in a field of research. It’s not a technology that you “use”. And if you give the output of your “ML”, then that is exactly identical to an LLM output. They don’t conflate anything. ChatGPT is also the output of “ML”.

            • thespcicifcocean@lemmy.world
              10 days ago

              When I say the output of my ML, I mean I give the prediction and a confidence score. For instance, if there’s a process that has a high probability of being late based on the inputs, I’ll say it’ll be late, along with the confidence. That’s completely different from feeding the figures into a GPT and repeating whatever the LLM says.

              And when I say “ML”, I mean a model I trained on specific data to do a very specific thing. There’s no prompting and no chat-like output. It’s not a language model.
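
              Something along these lines, if anyone’s curious (scikit-learn, with invented features and data; the real thing is trained on actual process history):

              ```python
              # Minimal sketch: predict whether a process will be late and report a
              # confidence score. Features and training data below are invented.
              import numpy as np
              from sklearn.linear_model import LogisticRegression

              # Fake history: [queue_length, staff_on_shift] -> 1 = late, 0 = on time
              X = np.array([[12, 2], [3, 4], [15, 1], [5, 3], [20, 2], [2, 5]])
              y = np.array([1, 0, 1, 0, 1, 0])

              clf = LogisticRegression().fit(X, y)

              new_case = np.array([[14, 2]])
              p_late = clf.predict_proba(new_case)[0, 1]
              print(f"predicted late: {p_late > 0.5}, confidence: {p_late:.0%}")
              ```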

      • Bazell@lemmy.zip
        10 days ago

        Yeah, it’s especially funny how people forget that even small models, the size of maybe 20 neurons, used for primitive NPCs in 2D games are called AI too, and can literally run on a button phone (not a Nokia 3310, but something only slightly more powerful). These small, specialized models have existed for decades already. And most interesting of all, relatively small models (a few thousand neurons) can work very well at predicting price trends, classifying objects by their parameters, calculating the odds of having a specific disease from symptoms alone, and so on. They generally work better than even LLMs at those same tasks.

        • chonglibloodsport@lemmy.world
          10 days ago

          Do you have an example of some games that use small neural networks for their NPC AIs? I was under the impression that most video game AIs used expert systems, at least for built-in ones.

          • Bazell@lemmy.zip
            10 days ago

            Well, as far as I know, modern chess engines are relatively small AI models that usually work by taking the current state of the board as input and then predicting the next best move. Like Stockfish. Also, there is a game called Supreme Commander 2 that is confirmed to use small neural models to run its NPCs. And, as a person somewhat involved in game development, I can say that the indie game engine libgdx provides an included AI module that can be tuned to whatever level you need for NPC decisions. And it can be scaled in any way you want.

            • Buddahriffic@lemmy.world
              10 days ago

              As I understand it, chess AIs are more like brute-force models: they take the current board, generate a tree of all possible moves from that position, and then iterate on those new positions up to a certain depth (which is what the depth of the engine refers to). And while I think some might use other algorithms to “score” each position and try to keep the search to the interesting branches, that could introduce bias that would make them miss moves that look bad but actually set up a better position. Ultimately, though, they do need some way to compare different ending positions if the depth doesn’t bring them to checkmate in all paths.

              So it chooses the most intelligent move it can find, but does it by essentially playing out every possible game, kinda like Dr Strange in Infinity War, except chess has a more finite set of states to search through.
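
              That’s basically depth-limited minimax search. Here’s the shape of it on a toy game instead of chess (each player adds 1 or 2 to a running total; whoever reaches 10 wins); a real engine swaps in chess move generation and a positional evaluation at the depth horizon:

              ```python
              # Depth-limited minimax on a toy game, to show the search structure only.
              # "Evaluation" at the horizon is just 0 here; chess engines use a real scorer.

              def legal_moves(total: int) -> list:
                  return [m for m in (1, 2) if total + m <= 10]

              def minimax(total: int, depth: int, maximizing: bool) -> int:
                  if total == 10:                       # the player who just moved reached 10 and won
                      return -1 if maximizing else 1
                  if depth == 0:
                      return 0                          # horizon reached: neutral heuristic score
                  scores = [minimax(total + m, depth - 1, not maximizing) for m in legal_moves(total)]
                  return max(scores) if maximizing else min(scores)

              # Pick the first move by searching deep enough to solve this tiny game fully.
              best = max(legal_moves(0), key=lambda m: minimax(m, depth=10, maximizing=False))
              print(f"best opening move: add {best}")   # optimal play: reach 1, then 4, 7, 10
              ```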

              • Bazell@lemmy.zip
                10 days ago

                Maybe. I haven’t studied modern chess engines that deeply. All I know is that you can either use the brute-force method, which recursively calculates each possible move, or train an AI model on existing brute-force engines so that it simply guesses the best possible move without actually recalculating every option. Both approaches work, each with its own benefits and downsides.

                But all of this is based on my own knowledge, which may be incomplete, so I recommend double-checking it.

          • Holytimes@sh.itjust.works
            10 days ago

            Black & White used machine learning, if I recall; absolutely a classic of a game, and I highly recommend a play if you never have. Dota 2 has a machine-learning-based AI agent for its bots, though I’m unsure if those are actually in the standard game or not.

            Forza and a few other racing games throughout the years have used ML to various degrees.

            And Hello Neighbor was a rather infamously bad indie game that used it.

            For a topical example, ARC Raiders used machine learning to train its AI during development, though it doesn’t run on the live servers to keep updating it.

            For an LLM example, Where the Wind Meets is using small LLMs for its AI dialogue interactions, which makes for very fun RP minigames.

            I’m sure there are more examples, but these are what I can think of and find off Google.

      • ThunderclapSasquatch@startrek.website
        10 days ago

        The best use I’ve gotten out of GPT is troubleshooting RimWorld mod list errors. Often I’ll slap the error in and it’ll tell me exactly which mod is the issue; even when it can’t, the info I get back narrows it down to 4 or 5 suspects.

    • Knock_Knock_Lemmy_In@lemmy.world
      10 days ago

      “No one who does not wear their pants on their head uses an LLM to make trades”

      LLMs are better than other methods at handling context and nuance for sentiment analysis. They can legitimately form part of trade generation.

    • KeenFlame@feddit.nu
      10 days ago

      Eh… What do you mean? The algos that trade fight at the microsecond level. They adapt to each other and never stop changing. It’s exactly the same problem. Do you think an LLM is a unique kind of neural net? They all work the same. When you try to sound like ML is not the same as LLMs, or as if ML just means neural nets, you don’t help anyone understand any of those concepts, because you don’t understand them yourself.

      • Credibly_Human@lemmy.world
        10 days ago

        That is a crazy amount of nonsensical word salad to use to try to call someone else out for lacking understanding.

        I mean, just the flawed ideas that all trading algos are neural nets, or that all neural nets are the same, or that the rectangle of ML doesn’t include neural nets… These are all wildly erratic non sequiturs.

  • Honytawk@feddit.nl
    10 days ago

    If a goldfish can trade and turn a profit, anything with a randomizer can do so.

    AI would be fine. Just as good as any full-time trader.
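
    For what it’s worth, that’s easy to sanity-check with a toy simulation: on a market with a slight upward drift, even a coin-flipping “goldfish” tends to end up in profit, it just captures less of the drift than buy-and-hold. All the numbers below are invented:

    ```python
    # Toy check of the "randomizer" claim: random in/out decisions on a drifting
    # random-walk market. Drift, volatility, and horizon are made-up numbers.
    import random

    random.seed(7)
    n_days, drift, vol = 250, 0.0005, 0.01

    buy_and_hold, goldfish = 1.0, 1.0
    for _ in range(n_days):
        r = random.gauss(drift, vol)        # today's market return
        buy_and_hold *= 1 + r
        if random.random() < 0.5:           # the "goldfish" flips a coin to be invested today
            goldfish *= 1 + r

    print(f"buy and hold: {buy_and_hold - 1:+.1%}   goldfish: {goldfish - 1:+.1%}")
    ```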