• 0 Posts
  • 106 Comments
Joined 11 months ago
Cake day: August 2nd, 2023


  • Your description is how pre-llm chatbots work

    Not really. We just parallelized the computation and used other models to filter and tokenize our training data. Sure, the loop looks more complex because of the parallelization and the tokenization of the words used as inputs and selections, but it doesn't change the underlying principles.

    Emergent properties don’t require feedback. They just need components of the system to interact to produce properties that the individual components don’t have.

    Yes, they need proper interaction, or you know, feedback, for this to occur. Glad we covered that. Having more items but gating their interaction is not adding more components to the system; it's creating a new system to follow the old, which in this case is still just more probability calculations. Sorry, but chaining probability calculations is not going to somehow make something sentient or aware. For that to happen, it would need to be able to influence its internal weighting or training data without external aid. Hint: these models are deterministic, meaning there is zero feedback or interaction to create emergent properties in this system.

    Emergent properties are literally the only reason llms work at all.

    No, LLMs work because we massively increased the size and throughput of our probability calculations, allowing increased precision in the predictions, which makes them look more intelligible. That's it. Garbage in, garbage out still applies, and making the model larger does not mean that garbage is going to magically create new control loops in your code. It might increase precision, since you have more options to compare and weigh against, but it does not change the underlying system.
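To make the "probability calculations" point concrete, here is a toy sketch (the corpus and function name are mine, purely illustrative) that builds a next-word probability table from raw counts. A real LLM does the same job at enormous scale with learned weights instead of counts, which is exactly why garbage in, garbage out still applies: the table can only reflect whatever text went into it.

```python
from collections import Counter, defaultdict

def next_word_probs(corpus):
    """Count, for each word, how often each other word follows it,
    then normalize those counts into probabilities."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        counts[cur][nxt] += 1
    return {
        cur: {nxt: c / sum(following.values())
              for nxt, c in following.items()}
        for cur, following in counts.items()
    }

probs = next_word_probs("the cat sat on the mat the cat ran")
# "the" is followed by "cat" twice and "mat" once,
# so its next-word probabilities are roughly 2/3 and 1/3.
print(probs["the"])
```

Scaling this up changes the precision of the table, not the nature of the system: it is still a lookup from context to next-word probabilities.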


  • No, the queue will now add popular playlists after what you were listening to when you restart the app, if your previous queue was a generated one. I'm not sure of the exact steps to trigger it, but it seems like if you were listening to a daily playlist and closed the app, then the next day the playlist has updated, and instead of pointing to the new daily it points to one of the popular playlists for the next songs in your queue. It doesn't stop the song you paused on; it just adds new shit to the queue after it once it loses track of where to point. Seems like they should just start shuffling your liked songs in that case, but nope, it points to a random pop playlist.



  • If you give it 10 statements, 5 of which are true and 5 of which are false, and ask it to correctly label each statement, and it does so, and then you negate each statement and it correctly labels the negated truth values, there’s more going on than simply “producing words.”

    It's not that more is going on; it's that the model had such a large training set that these true vs. false statements are likely covered somewhere in it, and the probabilities state it should assign true or false to the statement.

    And then, look at that, your next paragraph states exactly that: the models trained on true/false datasets performed extremely well at labeling true or false. It's saying the model is encoding, or setting weights for, the true and false values when that's the majority of its dataset. That's basically it; you are reading too much into the paper.


  • AI has been a thing for decades. It means artificial intelligence; it does not mean a large language model. A specially designed system that operates based on predefined choices or operations is still AI, even if it's not a neural network and looks like classical programming. The computer enemies in games are AI: they mimic an intelligent player artificially. The computer opponent in Pong is also AI.

    Now, if we want to talk about how stupid it is to use a predictive algorithm to run your markets, when it really only knows about previous events and can never truly extrapolate new data points and trends into actionable trades, then we could be here for hours. Just know it's not an LLM; there are different categories of AI, and an LLM is its own category.
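As a concrete illustration of "classical programming that still counts as AI", a Pong opponent can be nothing more than a rule that chases the ball (the function and parameter names here are made up for the sketch, not from any real game):

```python
def pong_ai_move(paddle_y, ball_y, speed=1):
    """Rule-based game AI: move the paddle toward the ball's vertical
    position, capped at the paddle's speed per frame. No learning and
    no neural network, yet it artificially mimics a player."""
    if ball_y > paddle_y:
        return min(speed, ball_y - paddle_y)   # move down toward the ball
    if ball_y < paddle_y:
        return -min(speed, paddle_y - ball_y)  # move up toward the ball
    return 0  # already aligned with the ball

# A paddle at y=10 chases a ball at y=14, one step per frame:
y = 10
for _ in range(5):
    y += pong_ai_move(y, 14)
print(y)  # reaches 14 and stays there
```

This is the traditional sense of the term: predefined operations producing apparently intelligent behavior, no LLM anywhere in sight.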


  • Do you understand how they work or not? First, take all human text online. Next, rank how likely each word is to come after another. Last, write a loop that picks the most probable next word until the end-of-line character is deemed most probable. There you go; that's essentially the loop of an LLM. There are design elements that make creating the training data quicker, or the model quicker at picking the next word, but at its core this is all they do.

    It makes sense to me to accept that if it looks like a duck, and it quacks like a duck, then it is a duck, for a lot (but not all) of important purposes.

    I.e. the only duck it walks and quacks like is autocomplete; it does not have agency or any other "emergent" features. For something to even have an emergent property, the system needs feedback from itself, which an LLM does not have.
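The loop described above might be sketched like this, with a hand-written probability table standing in for the trained model (the table, tokens, and function name are invented for illustration). Note the determinism mentioned earlier: with greedy selection, the same start token always yields the same text, and nothing in the loop feeds back into the table.

```python
# Hypothetical next-token probability table standing in for a trained model.
PROBS = {
    "<start>": {"the": 0.9, "a": 0.1},
    "the":     {"duck": 0.6, "cat": 0.4},
    "duck":    {"quacks": 0.7, "<end>": 0.3},
    "cat":     {"<end>": 1.0},
    "quacks":  {"<end>": 1.0},
}

def generate(start="<start>"):
    """Greedy decoding: repeatedly pick the most probable next token
    until <end> becomes the most probable one. The table itself is
    never modified, so there is no feedback into the model."""
    out, token = [], start
    while True:
        token = max(PROBS[token], key=PROBS[token].get)
        if token == "<end>":
            return " ".join(out)
        out.append(token)

print(generate())                # the duck quacks
print(generate() == generate())  # True: same input, same output, every time
```

Real systems add sampling temperature and far richer context handling, but the control flow is this loop.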



  • It would work the way the internet worked before Google and Facebook monetised monitoring everyone to sell ads

    You mean the ads on the side of the screen that told you to play some interactive game so they could install malware? Ads of some form were always a thing on the internet: first in forum posts, then website ads, then Google started essentially buying ad space on other websites and paying you for it. I hate Google, but when that first came out, at least most ads weren't filled with malware.


  • The problem isn't the funding, it's people's reactions. Why slave away for someone else's company, even if it provides utility to your society, if you can survive and even thrive creatively on UBI? What happens then? Do we get worse class warfare than we have now?

    What happens when people realize that most of what can be automated away at current levels are executive and CEO positions? When they leave with golden parachutes, are you going to ask for UBI for them? If not, we have either set a legal precedent that those automated-away jobs don't receive UBI, or we have just facilitated more capitalistic greed for those executives.

    Is UBI set up on a global scale? If not, how do we stop dual-citizenship individuals from collecting UBI while working another job remotely from the second nation they are registered with, creating inefficiencies in our program which could make it a target for regressive policies? Think of Republicans constantly saying illegals are stealing our benefits so we should block them and cut funding to the programs; how do we defend against those attacks?

    I could keep going, but the problem is how we implement this, without everything being automated, and create a fair and equitable system for all involved. While it would be nice to just throw money at everyone, you need to take into account individuals' reactions. We aren't in a vacuum, and yet we isolate ourselves in echo chambers as if our perspectives were the only ones out there. We lose nuance by doing this, and then get aggravated that nothing is done, because the cause of that nuance isn't even on our radar, since we lack communication with people who have differing views and opinions.