Feel like we’ve got a lot of tech-savvy people here, so it seems like a good place to ask. Basically, as a dumb guy that reads the news, it seems like everyone who lost their mind (and savings) on crypto just pivoted to AI. On top of that, you’ve got all these people invested in AI companies running around with flashlights under their chins like “bro this is so scary how good we made this thing”. Seems like bullshit.

I’ve seen people generating bits of code with it, which seems useful, but idk man. Coming from CNC, I don’t think I’d just send it with some ChatGPT code. Is it all hype? Is there something actually useful under there?

  • molave@reddthat.com · edited · 1 year ago

    I like to build up fictional settings. Not being limited to commissioning art for easy conceptualization, or to nicking images as-is from the internet, is extremely useful.

  • leanleft@lemmy.ml · 7 months ago

    layman here.

    probably because…

    • it can sift through a lot of garbage.
    • it’s easy to use, and its value isn’t complicated to understand.
    • it’s useful. like a super search engine for idiots.
    • it can probably automate a lot of jobs. it can also probably correct or cover up a lot of gaping flaws that have existed for the last few decades.
    • there’s nothing else exciting going on right now.
    • it’s an interesting and valuable tool. progress has hit a point where the achievements are hard to ignore.

    ** relating to LLMs/ChatGPT types. a snarky, opinionated, and somewhat speculative, subjective review!

  • mim@lemmy.sdf.org · 1 year ago

    I don’t think the comparison with crypto is fair.

    People are actually using these models in their daily lives.

    • hglman@lemmy.ml · 1 year ago

      People have actually used crypto to make payments. Crypto is valuable, but only when it’s widely adopted. Before you say something like “use a database,” you might take the time to understand what decentralized blockchains actually accomplish: namely, removing a class of corruption from information-coordination tasks.

      • beatle@aussie.zone · 1 year ago

        Why bother with the overhead of a blockchain when users centralise on a handful of exchanges anyway?

        • hglman@lemmy.ml · 1 year ago

          Exchanges only exist to convert away from crypto; if crypto were the standard money, they wouldn’t survive. They aren’t the banks of the blockchain. They are the intersection of fiat banks and the blockchain.

          • beatle@aussie.zone · 1 year ago

            Strongly disagree; some exchanges don’t even have fiat on-ramps.

            Blockchain is inefficient and pointless when users centralise on Coinbase and Binance.

  • Candid_Technology_66@lemmy.ml · 1 year ago

    In various jobs, AI can do the less important and easier work for you, so you can focus on the more important work. For example, say you’re doing research that depends on a specific kind of data you’ve collected, but all of that data is cluttered and messy. AI can sort the data for you, so you can spend your time on the research itself instead of on wrangling the data into something usable. Or in programming, AI can write the easy parts of a program for you while you do the harder and more important parts, which saves time.
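
    To make that concrete, here’s a minimal sketch of the data-tidying idea, purely as an illustration: it assumes the OpenAI Python client, and the model name, rows, and column layout are all invented.

    ```python
    # Hypothetical sketch: asking an LLM to normalise messy records.
    # Assumes the OpenAI Python client; model, columns, and data are invented.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    messy_rows = [
        "smith, JOHN - 42yrs - new york",
        "Jane Doe;37;NYC",
    ]

    prompt = (
        "Normalise each row into CSV with columns name,age,city. "
        "One output line per input row, no header.\n\n" + "\n".join(messy_rows)
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    ```

    The output still needs a human spot-check, but that turns hours of hand-cleaning into a quick review pass.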

  • ImplyingImplications@lemmy.ca · edited · 1 year ago

    AI is nothing like cryptocurrency. Cryptocurrencies didn’t solve any problems. We already use digital currencies and they’re very convenient.

    AI has solved many problems we couldn’t solve before and it’s still new. I don’t doubt that AI will change the world. I believe 20 years from now, our society will be as dependent on AI as it is on the internet.

    I have personally used it to automate some Excel stuff I do at work. I just described my sheet and what I wanted done, and it gave me a block of code that did it. I had previously spent time looking stuff up on forums with no luck; my issue was too specific to my work for anyone else to have run into it. One query to ChatGPT solved my issue perfectly in seconds, and that’s just a new online tool in its infancy.
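
    For flavour, the kind of block such a query might return - purely illustrative, since the real sheet is work stuff; the workbook, sheet name, and rule here are invented:

    ```python
    # Illustrative LLM-style answer to an Excel chore (openpyxl assumed).
    # Workbook name, sheet name, and the "overdue" rule are invented.
    from openpyxl import load_workbook

    wb = load_workbook("report.xlsx")
    ws = wb["Q3"]

    # Copy every row whose status column reads "OVERDUE" to a summary sheet.
    summary = wb.create_sheet("Overdue")
    for row in ws.iter_rows(min_row=2, values_only=True):
        if row[3] == "OVERDUE":  # column D holds the status
            summary.append(row)

    wb.save("report_flagged.xlsx")
    ```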

  • rustyricotta@lemmy.ml · 1 year ago

    As others have said, in its current state it can be useful in the early stages of anything you do, such as brainstorming. ChatGPT (which I have the most experience with) and other LLMs excel at organizing, formatting, explaining, etc. the information of the internet. In almost all cases (at the moment), whatever they spit out needs to be fact-checked and refined.

    Just from personally dinking around with ChatGPT a little, it does give you that “scarily good” feeling at first. You do start seeing its flaws after a while, and you learn that it’s quite fallible. What it spits out can still be good for additional ideas and brainstorming.

    What I want it to do (and it might already, if not soon): when I program something up and for the life of me can’t find the cause of some bug, just give it my entire code and my problem and see what’s the deal.
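
    You can already approximate this with the chat APIs; a rough sketch of the idea, with placeholder paths and model name (the catch being that “my entire code” has to fit in the model’s context window):

    ```python
    # Hypothetical debugging helper: send source plus the error to an LLM.
    # Paths and model name are placeholders; context-window limits apply.
    from openai import OpenAI

    client = OpenAI()

    with open("buggy_script.py") as f:  # placeholder path
        source = f.read()
    with open("error.log") as f:        # placeholder path
        error_text = f.read()

    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Find the bug and explain it.\n\nCode:\n{source}\n\nError:\n{error_text}",
        }],
    )
    print(reply.choices[0].message.content)
    ```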

  • demesisx@programming.dev · 1 year ago

    Yes. What a strange question…as if hivemind fads are somehow relevant to the merits of a technology.

    There are plenty of useful, novel applications for AI, just like there are PLENTY of useful, novel applications for crypto. Just because the hivemind has turned to a new fad in technology doesn’t mean that actual, intelligent people simply stop using these novel technologies. There are legitimate use-cases for both AI and crypto. Degenerate gamblers and Do Kwon/SBF just caused a pendulum swing on crypto…nothing changed about the technology. It’s just that the public has had their opinions shifted temporarily.

  • It’s really good at filling in gaps, rearranging things, aggregating data, or finding patterns.

    So if you need gaps filled, things rearranged, data aggregated, or patterns found: AI is useful.

    And that’s just what this one dumb guy knows. Someone smarter can probably provide way more uses.

    • tara@lemmy.blahaj.zone · 1 year ago

      Hi, academic here.

      I research AI - better referred to as machine learning (ML), since that does away with the hype and more accurately describes what’s happening - and I can provide an overview of the three main types (with a toy code sketch after the list):

      1. Supervised Learning: predicting the correct output for an input, trained from known examples. E.g.: “Here are 500 correctly labelled pictures of cats and dogs; now tell me whether this picture is a cat or a dog.” Other examples include facial recognition and numeric prediction tasks, like predicting today’s expected profit or stock price from historic data.

      2. Unsupervised Learning: identifying patterns and structures in data, trained on unlabelled data. E.g.: “Here are a bunch of customer profiles; group them by similarity however makes most sense to you.” This can be used for targeted advertising. Another example is generative AI such as ChatGPT or DALL-E: “Here’s a bunch of prompt-responses/captioned-images; identify the underlying way of creating the response/image from the prompt/image.”

      3. Reinforcement Learning: decision-making to maximise a reward signal, trained through trial and error. E.g.: “Control this robot to stand where I want; the reward is negative every second you’re not there, and very negative whenever you fall over. A positive reward is given whilst you are in the target location.” Other examples include playing board games or video games, or selecting content for people to watch/read/look at to maximise their time spent using an app.
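
      A toy sketch of all three setups, purely illustrative (random data stands in for the examples above; numpy and scikit-learn assumed):

      ```python
      # Toy versions of the three types. Data is random noise, purely to
      # show the shape of each setup (numpy + scikit-learn assumed).
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)

      # 1. Supervised: learn input -> label from labelled examples.
      X = rng.normal(size=(500, 4))         # 500 "pictures" as feature vectors
      y = (X[:, 0] > 0).astype(int)         # known labels: 0 = cat, 1 = dog
      clf = LogisticRegression().fit(X, y)
      print("label for a new example:", clf.predict(X[:1])[0])

      # 2. Unsupervised: group unlabelled data by similarity.
      profiles = rng.normal(size=(200, 3))  # unlabelled "customer profiles"
      groups = KMeans(n_clusters=4, n_init=10).fit_predict(profiles)
      print("cluster of first profile:", groups[0])

      # 3. Reinforcement: trial and error against a reward signal
      #    (an epsilon-greedy bandit, the simplest possible setting).
      true_payouts = [0.2, 0.5, 0.8]        # hidden from the agent
      estimates = np.zeros(3)
      counts = np.zeros(3)
      for _ in range(1000):
          explore = rng.random() < 0.1
          arm = int(rng.integers(3)) if explore else int(np.argmax(estimates))
          reward = float(rng.random() < true_payouts[arm])  # noisy 0/1 reward
          counts[arm] += 1
          estimates[arm] += (reward - estimates[arm]) / counts[arm]
      print("best action found:", int(np.argmax(estimates)))
      ```

      The bandit at the end is reinforcement learning with no states at all - just actions and a reward - which is why it fits in ten lines; add states and you get Q-learning and friends.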

        • tara@lemmy.blahaj.zone · 1 year ago

          So typically there are 4 main competing interpretations of what AI is:

          1. Acting like a human
          2. Thinking like a human
          3. Acting rationally
          4. Thinking rationally

          These are from Russell and Norvig’s “AI: A Modern Approach”.

          Alan Turing’s “Turing Test” tests whether a given agent is artificially intelligent (according to definition #1). The test involves a human conversing with the agent via text messages and deciding whether the agent is human or not. Large language models, a form of machine learning, can produce chatbot agents which pass this test - suitably prompted instances of GPT-4 texting with an assessor, for example. The assessor also occasionally interacts with real humans, so they are kept sufficiently uncertain.

          By this point, I think that machine learning in the form of an LLM can achieve artificial intelligence according to definition #1, but that isn’t what most non-tech, non-academic people mean by AI.

          The mainstream definition of AI is what we would call Artificial General Intelligence (AGI): an agent that meets a given one of Norvig’s criteria for AI across multiple scenarios and situations that it has never encountered before.

          Many would argue that LLMs like GPT-4 do not meet the criteria for AGI because they are not general enough: unable to learn to play an Atari game, for example, or to learn an entirely unseen language to fluency.

          This is the difference between an LLM and a fictional AGI like GLaDOS or Skynet.

          Additionally, there are forms of machine learning, like k-means clustering, whose only function is to identify related groups within a dataset. I would assert these are not AI, although a weak argument could be made that they are thinking “rationally” enough to meet definition #4.

          Then there are forms of AI which are not machine learning, such as heuristic agents - agents whose reasoning is hard-coded by humans - like the chess engine Stockfish, or the AI found in most video games.

          Ultimately, AI can describe machine learning if “AI” is understood as anything which meets one or more of Norvig’s definitions. But since most people say AI when they mean AGI, I think “machine learning” is the better term: less undeserved hype, less marketing disinformation, and generally better at communicating what is being talked about.

  • nickwitha_k (he/him)@lemmy.sdf.org · 1 year ago

    As a software engineer, I think it is beyond overhyped. I have seen it used once in my day job, before it was banned. In that case, it hallucinated a function in a library that didn’t exist outside of feature requests and based its entire solution around it. It cannot replace programmers or creatives and produce consistently equal quality.

    I think it’s also extremely disingenuous for Large Language Models to be billed as “AI”. They do not work like human cognition and are basically just plagiarism engines. They can assemble impressive stuff at a rapid speed but are incapable of completely novel “ideas” - everything that they output is built from a statistical model of existing data.
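
    To make “statistical model” concrete: at every step, the model just assigns a score to each possible next token and samples one. A toy sketch, with the network stripped out (the vocabulary and scores are invented; a real LLM computes the scores over tens of thousands of tokens):

    ```python
    # Toy next-token sampling: the core loop of an LLM, minus the network.
    # Vocabulary and logits are invented for illustration.
    import numpy as np

    vocab = ["the", "cat", "sat", "on", "mat"]
    logits = np.array([1.2, 0.3, 2.1, 0.9, -0.5])  # model's raw scores

    probs = np.exp(logits) / np.exp(logits).sum()  # softmax -> probabilities
    next_token = np.random.default_rng().choice(vocab, p=probs)
    print(next_token)  # drawn in proportion to the learned statistics
    ```

    Everything “novel” it produces is a walk through that distribution.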

    If the hallucination problem could be solved for a local dataset, I could see LLMs as a great tool for interacting with databases and documentation (for a fictional example, see the VIs in Mass Effect). As it is now, however, I feel that it’s little more than an impressive parlor trick - one with a lot of future potential that is being almost completely ignored in favor of bludgeoning labor, worsening the human experience, and increasing wealth inequality.

    • unknowing8343@discuss.tchncs.de · 1 year ago

      You have not realised yet that… yes, it has every right to be called AI. They are doing the same thing we do: learning, then creating thoughts based on those learnings.

      I have even asked them to make up words not related to any language, and they create them: entirely new, never-used words that are not even composites of others. These are creative machines. They might fail at answering some questions, but that is partially why we call it Artificial Intelligence. Nobody is claiming it’s a machine of truth, just a machine that “learns” and “knows”. Sometimes correctly, sometimes wrongly. Just like us.

      • nickwitha_k (he/him)@lemmy.sdf.org · 1 year ago

        Incorrect. An LLM COULD be part of a system that implements AI but, by itself, it possesses no intelligence. Claiming otherwise is akin to claiming that the Pythagorean theorem is an AI because it “understands” geometry. Neither actually understands the data it is fed; both are just good at producing results that make it seem that way.

        Human cognition does not work that way; it is much more complex and squishy. Associating current experiences with remembered ones is only a fraction of what goes on in a brain during cognition.

        • unknowing8343@discuss.tchncs.de · 1 year ago

          I am not saying it works exactly like humans inside the black box. I am just saying it works. It learns and then creates thoughts. And it works.

          You talk about how human cognition is more complex and squishy, but nobody really knows how it truly works inside.

          All I see is the same kind of black box: a kid trying many, many times to stand up, or to say “papa”, until it somehow works, and now the pathway is set up in the brain.

          Obviously ChatGPT is just dealing with text. But does that make it NOT intelligent? I think it makes it very text-intelligent. Add together all the AI pieces we are building and you’ve got yourself a general AI that will do anything we do.

          Yeah, maybe it doesn’t work like our brain. But is the human brain’s structure the only possible structure for intelligence? I don’t think so.

          • Alex@lemmy.ml · 1 year ago

            If you consider the amount of text an LLM has to consume to replicate something approaching human-like language, you have to appreciate there is something else going on with our cognition. LLMs give responses that make statistical sense, but humans can actually understand why one arrangement of words makes sense and another doesn’t.

            • unknowing8343@discuss.tchncs.de · 1 year ago

              Yes, it’s inefficient… and OpenAI and Google are losing exactly because of that.

              There are already open-source models out there rivalling ChatGPT that you can fine-tune on a 10-year-old laptop in a day.

              And this is just the beginning.

              Also… maybe we should count how many words of exposure a kid gets throughout their life before they can develop arguments like ChatGPT’s… because the thing is, ChatGPT knows way more about many things than any human being ever will. Easily thousands of times more.

              • nickwitha_k (he/him)@lemmy.sdf.org · 1 year ago

                And this is just the beginning.

                Absolutely agreed, so long as protections are put in place to defang it as a weapon against labor (if few have leisure time or income to support tech development, I see great danger of stagnation). LLMs do clearly seem to be an important part of advancing towards real AI.