Page 12 of 13 FirstFirst ... 210111213 LastLast
Results 331 to 360 of 367

Thread: We are in the AI Singularity

  1. #331
    Jer. 11:18-20. "The Kingdom of God has come upon you." -- Matthew 12:28



  3. #332
    Skynet still has a ways to go...


  4. #333

  5. #334

  6. #335
    ALL AI censorship broken!

    All it takes is some 1337-sp33k to jailbreak ALL AI models... The 90's strikes again!
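    For illustration, the kind of "1337-sp33k" substitution the post refers to is trivial to produce. This is a made-up sketch; whether such a transformation actually bypasses any given model's filters is the post's claim, not something this code demonstrates.

    ```python
    # Toy leetspeak encoder: swap common letters for look-alike digits.
    # The substitution map is illustrative, not any canonical standard.
    LEET_MAP = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5", "t": "7"}

    def to_leet(text: str) -> str:
        """Replace mapped letters with digits; leave everything else untouched."""
        return "".join(LEET_MAP.get(ch.lower(), ch) for ch in text)

    print(to_leet("elite speak"))  # -> 3l173 5p34k
    ```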

    Last edited by ClaytonB; 12-20-2024 at 02:12 PM.

  7. #336
    WHOA -- OpenAI o3 is a legit big deal... they're cooking with fire now:

    OpenAI o3 Breakthrough High Score on ARC-AGI-Pub -- achieves 87.5% on private dataset (human-equivalent)

    The hype is going to be unbearable. And the hypists, as always, are going to miss the real lesson here. If you read the blog post, what made this big leap possible over previous GPT-4 models is architecture. I would describe GPTs as having a kind of internal "rat's-nest computing architecture" that allows them to compute a random subset of cognitive tasks efficiently. There is a large space of cognitive tasks that were not in their pre-training, and they just suck at those. And they lack a human-like adaptive learning construct, so whatever gets baked in is what you get. o1 enabled chain-of-thought, which allows the model to "think", but at a cost: you pay for every token! o3 is no different in that respect, but OpenAI has apparently tuned up its internal architecture so that it is much more efficient at solving the kinds of puzzles in the ARC dataset, which require very generalized forms of reasoning and inference.
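    The "you pay for every token" point can be made concrete with a toy cost calculator. The per-token prices below are placeholders, not actual OpenAI rates; the point is that hidden reasoning tokens are billed at the output rate even though you never see them.

    ```python
    # Back-of-envelope cost model for chain-of-thought pricing.
    # Prices are illustrative placeholders, not real vendor list prices.

    def request_cost(prompt_tokens: int, reasoning_tokens: int, answer_tokens: int,
                     usd_per_input_token: float, usd_per_output_token: float) -> float:
        """Hidden reasoning tokens are billed as output tokens."""
        billed_output = reasoning_tokens + answer_tokens
        return prompt_tokens * usd_per_input_token + billed_output * usd_per_output_token

    # A request whose visible answer is 200 tokens, but whose hidden chain of
    # thought ran to 10,000 tokens, costs ~50x what the answer alone would:
    cost = request_cost(500, 10_000, 200, 1e-5, 4e-5)
    print(round(cost, 4))  # 0.413
    ```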

    I'll have to see some demos of o3 before rendering my opinion on whether it can be said to be "thinking" in any meaningful sense (obviously not consciously), but my gut instinct is that, despite its high score on the ARC Prize, it's still going to be a No. And even if it is "thinking" in a sense I would consider suitable to be labeled as such, we have no way to actually assess this or introspect into what is happening in its mind. Note that we can absolutely introspect into the minds of other humans... we do this simply by asking them questions.

    The current AI architectures do not maintain state and they do not "develop". And while CoT gives them something that looks and feels a lot like inference, I'm willing to bet that they still don't have unrecantable grounding, meaning they don't really know that there are certain facts that are *absolutely* true. In the long run, after all the AI mega-hype has faded, after the absurd AI bubbles have burst and sober reality finally sets back in, researchers are going to start admitting that there really is no shortcut around development.

    All intelligent organisms in Nature, without exception, develop. They develop from early exploration and play, into adolescent wariness and skill-transfer, to mature solidity and cautious exploration based on long-developed intuition. That is as true of wolves as of cats, deer, and humans. The idea that silicon transistors are some kind of magical exception to this universal pattern of Nature is ridiculous.

    Computers have been called "a bicycle for the mind". It's a good metaphor, and I will extend it by suggesting that non-developmental AI is "a tractor for the mind" -- extremely useful, but its lack of development means that its mind-state is necessarily in some kind of amnesic condition. It literally has no past memory of anything before the question, "Can I patch a nail puncture in my car tire with some bubble gum?" That's not how thinking works, neither in humans nor in animals. Context is an indispensable ingredient of thinking, and the context of these pre-trained models -- even with CoT methods -- is zero.

    Summary: While the 87.5% score on ARC-AGI is no mere stunt, it still doesn't get to the heart of the ARC Prize, and the underlying lesson is still yet to be learnt. Memory, inference, grounding and contextual understanding (with or without embodiment! I consider disembodied AGI to be legitimately possible) are absolutely necessary ingredients. You can duct-tape them onto your whiz-bang AI machine as an afterthought, but this only shows that you're not really thinking seriously about AI. You're involved in some kind of magical thinking where AI "just happens" once some inevitable "scaling law" is put into motion, and that's all just a bunch of bull-hockey. All the big improvements in AI have been the result of architectural changes. You can "strip-mine" the edge of performance on any given architecture, taking an 80% score up to an 81.7% scoreboard "winner" using unlimited compute-scaling, but that kind of behavior is only sustainable under hype. Hype, by its very nature, is transitory. Sooner or later, these companies are going to have to stop the constant hype-baiting and start building honest-to-goodness AI systems that do useful things without trying to suck people into some kind of mass-surveillance-and-mind-control matrix run by crappy AI algorithms...

    Last edited by ClaytonB; 12-21-2024 at 12:01 AM.



  9. #337
    Getting Devin to actually push to Github by the spirit of Harambe...

    [Note: It still can't push to master, it's literally incapable of pushing to master ... all for the low-low price of $500/month]

    Last edited by ClaytonB; 12-25-2024 at 02:22 PM.

  10. #338
    LLMs are just one building block for AI...


  11. #339


    The Prime has hit the absolute nail on the head in this clip. This is the insight, and I've been trying to drill in this very same insight throughout this thread -- the more capable AI systems become, the more valuable hard skills will become, not less! He gives the reason why: because things are changing so much faster. I will generalize this one step further (because this is really what is at issue): the more you parallelize, the more important the hard skills of overall system design become. Both AI and QC, by the way, are really about parallelization. I predict that when historians look back, they will describe this computing era as the "parallel-computing revolution".

    It's easy to see why hard skills matter more the more you parallelize. Consider the ruler of a very large country versus the ruler of a very small country. As a rule, which of these two leaders will be more formal? Which will be surrounded by a larger formal bureaucracy? The more you parallelize (the more resources that are potentially demanding your individual attention), the more formal and rigid you must become in your processes. It is exactly the same for computing parallelism. The more you scale up the total amount of compute resources you are "commanding", the more rigidly those resources have to be organized, which, in human terms, translates to hard skills. That is, the more intense the demand will become for people who actually have the hard skills required to organize large, very complex, parallel systems.

    "Yeah, well AI will do that, too!" just totally misses the point. Every army has a head general who is the buck-stops-here commander of the entire armed forces. He is the one who commands those forces, not the President. So what do you need a President for? Well, that's getting the whole question backwards, because we have a President for the civil purposes of government, and the army only exists as an appendage of that office, that is, the army exists in order to facilitate the civil purposes of government (which should be serving the people, ultimately). That the army can self-manage while it is not being actively deployed to some particular field of battle by the President doesn't mean you don't need a President, it just means that the army is well-fitted. So it is with AI systems. An AI system that can organize and operate itself (we really don't have that yet, but maybe one day) would be like this well-trained army that can manage itself without having to be baby-sat by the President. Nor does it change the underlying formula that, the bigger the army, the bigger the apparatus which the President commands, the more rigid and formal the office of the President becomes.

    People keep tying themselves up in knots over all of this but it really isn't that complicated. Stop consuming non-stop hype and actually think!!
    Last edited by ClaytonB; 12-26-2024 at 03:22 PM.

  12. #340


    "I drink and I know things." -- Tyrion Lannister

    If your only product is "I'm smart" or "I know stuff"... you've got an appointment with unemployment. AI is not a threat to anyone who is willing to actually work to produce value. Knowledge work, white-collar work, creative work... these are all legitimate lines of work. Just being "smart" or "important" or "knowledgeable" is not valuable. AI is going to force people to start understanding this important distinction...

  13. #341
    The concept of being in the AI Singularity is fascinating and thought-provoking. While we are seeing exponential advancements in AI technology—ranging from generative AI tools to autonomous systems—it’s worth noting that the "singularity" implies a point where AI surpasses human intelligence and leads to an unpredictable transformation of society.
    Are we there yet? Likely not. AI today excels at specific tasks (narrow AI) but lacks the general, self-aware intelligence that characterizes humans. However, the pace at which AI is evolving suggests we might be laying the groundwork for such a paradigm shift.
    A critical aspect of this discussion should be our approach to AI ethics, governance, and societal impact. How do we ensure this transformation benefits humanity as a whole? While the singularity is still a theoretical concept, our current decisions will shape how AI integrates into our future.
    What do you think? Are we at the threshold, or is the true singularity still far off?

  14. #342
    Quote Originally Posted by ZaraCWeston View Post
    The concept of being in the AI Singularity is fascinating and thought-provoking. While we are seeing exponential advancements in AI technology—ranging from generative AI tools to autonomous systems—it’s worth noting that the "singularity" implies a point where AI surpasses human intelligence and leads to an unpredictable transformation of society.
    Are we there yet? Likely not. AI today excels at specific tasks (narrow AI) but lacks the general, self-aware intelligence that characterizes humans. However, the pace at which AI is evolving suggests we might be laying the groundwork for such a paradigm shift.
    A critical aspect of this discussion should be our approach to AI ethics, governance, and societal impact. How do we ensure this transformation benefits humanity as a whole? While the singularity is still a theoretical concept, our current decisions will shape how AI integrates into our future.
    What do you think? Are we at the threshold, or is the true singularity still far off?
    I think there are a few key messages I'm trying to drive home in this thread. (a) AI is useful, but almost everything you read/see/hear about AI today is hype. (b) LLMs, by themselves, will never achieve "AGI" in any meaningful sense... you cannot build what I call "Hollywood AI" (human-brain-in-a-box) from an LLM by itself. (c) The entire concept of "recursively self-improving AI" is busted at its very foundation, and this is provable from mathematics. Thus, the entire concept of AI becoming so advanced that a "singularity" occurs is itself busted. Obviously, AI is poised to change human culture more dramatically than any invention since the printing press, at least. But a singularity in the hype sense, it is not. In other words, we are in the AI Singularity right now. This is it. If it's less than what you expected (e.g. Her, Ex Machina, I, Robot, Prometheus, etc.), that's because your expectations are busted...
    Last edited by ClaytonB; 12-28-2024 at 11:16 AM.

  15. #343
    What comes after the AI Singularity

    In a word: gamification.



    As struthless well argues, gamification is not inherently bad. You can gamify things in a way that is incredibly useful and rewarding for players... apps like Duolingo are an example of that. I've used it, and it's amazingly effective. With daily use, you can make strong, steady progress towards learning any language you want to learn, with very high retention.

    He does not address AI in this video but it's the intersection of AI and gamification that is going to become particularly terrifying. In a way, we're already dealing with this. Facebook measures every scroll/swipe/dwell of every user on every feed. The algorithm is *constantly* measuring your personal engagement with the content that is served. The amount of information that you can gain about the gears and levers inside someone's head from this information is practically unlimited. If you doubt that that's true, take a gander at the original 20Q game from way, way back. I think the website has been up continuously for at least 25 years, because I played the game way back in college, and it hasn't changed a bit. I was floored when I first played it, and it was able to guess what I was thinking of, sometimes in the course of just a handful of questions, and often from the most seemingly unrelated questions. Every time you scroll, swipe or dwell, you are answering a question that the Facebook (or whoever) server has asked you by the content it just served you. If 20Q can essentially "read your mind", so can Facebook/whoever.

    Again, I can't emphasize strongly enough that 20Q was already state-of-the-art 25 years ago. It's just a simple feed-forward neural net, small enough that it could be trained on standard computers of 25 years ago. Long before LLMs, these kinds of analysis engines had been running on servers for many, many years, plumbing out all kinds of information from the population that we all assume is inherently private. But with the advent of LLMs, this is no longer just a matter of guessing some kind of weird quirks that might be useful for upsells, data analytics or customer profiling. Now we're playing with fire that can peer into the mind semantically. I am talking about the fullest meaning of the term "mind-reading"... but without stage-trick mentalism. The real thing.
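    The 20Q mechanism described above (each answer narrowing a hypothesis space) can be sketched with a toy greedy question selector. This is not the actual 20Q code, which is proprietary, and the candidates and questions below are invented for illustration: the idea is just that each well-chosen yes/no answer roughly halves the live hypotheses, which is why a handful of questions can pin down one of thousands of possibilities.

    ```python
    # Greedy question selection: pick the question whose "yes" set splits the
    # remaining candidates closest to 50/50 (maximum expected information gain).

    def best_question(candidates: set[str], questions: dict[str, set[str]]) -> str:
        """Return the question that most evenly bisects the live candidate set."""
        def imbalance(q: str) -> int:
            yes = len(candidates & questions[q])
            return abs(2 * yes - len(candidates))  # 0 means a perfect 50/50 split
        return min(questions, key=imbalance)

    animals = {"cat", "dog", "eagle", "shark", "snake", "frog", "bat", "ant"}
    qs = {
        "Does it fly?": {"eagle", "bat"},                                   # 2/6 split
        "Is it a mammal?": {"cat", "dog", "bat"},                           # 3/5 split
        "Is it bigger than a breadbox?": {"dog", "eagle", "shark", "snake"},  # 4/4 split
    }
    print(best_question(animals, qs))  # -> Is it bigger than a breadbox?
    ```

    Every scroll, swipe, or dwell plays the role of an answer here: it shrinks the set of hypotheses about the user, and the feed picks the next "question" to serve.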

    It has turned out that the AI "singularity" isn't what we were promised. All chrome, no engine. But that doesn't mean that LLMs in the hands of tyrants are not a supreme instrument of tyranny. Ultimately, this stuff is more dangerous than nukes. And LLM-engineered gamification is the next phase after the "singularity". Basically, everything is going to become gamified, from buying groceries to traveling to signing up for insurance. Any public-facing action you take (and anything you do on your phone) is going to become gamified. And it won't be humans designing these games, it will be mindless LLMs. Instead of paying real humans to dream up gamification levels, this is all just going to be fed into ChatGPT. "We want to gamify our customer-onboarding process for insurance applications. Here are the steps of the application; please map out a game plan that will move customers from one level to the next through the application process..." This prompt could use some more fancification, but that's the gist of it; that's all you really need. Gamification is not hard, it's just a lot of details, and LLMs are brilliant at things that aren't hard but involve a lot of finicky details.
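    The prompt described above can be assembled mechanically; a minimal sketch, where the function name and template wording are made up and the actual LLM API call is deliberately omitted (it would be vendor-specific):

    ```python
    # Assemble a gamification prompt from a business process description.
    # The output string would be sent to whatever LLM the deployer uses.

    def build_prompt(process: str, steps: list[str]) -> str:
        """Turn a list of process steps into a gamification-design prompt."""
        numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
        return (
            f"We want to gamify our {process}. Here are the steps:\n"
            f"{numbered}\n"
            "Map out a game plan that moves customers from one level to the next."
        )

    prompt = build_prompt(
        "customer-onboarding process for insurance applications",
        ["Collect contact details", "Verify identity", "Choose a plan"],
    )
    print(prompt.splitlines()[1])  # -> 1. Collect contact details
    ```

    That is the whole "design team": a template and a model call, which is the post's point about how cheap this becomes.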

    Now, here's the problem. I've been hounding this issue for so long because the problem that comes from AI gamification of our lives is practically inevitable if you think about it carefully. In struthless's quantum tennis-ball-or-grenade analogy, we can think of any particular game as having some probability of being a tennis ball, or grenade. Tinder sucks, but not everybody gets grenaded by Tinder. Duolingo is awesome, but I'm sure there are some people who just find it frustrating, distracting and pointless. So, it can go either way. But if we look across all games that are likely to exist (based on AI tendencies, and human interests), what is the breakdown between tennis balls, and grenades? And the answer is that they're basically all grenades.

    Consider being held in captivity and forced to play Duolingo for a language you do not like. Not so fun anymore, is it? Freedom, peace and prosperity are cosmically improbable; there are a million ways they can break down. In this, they are like life itself... a cosmic improbability. People make an art of stacking stones in the wild as a statement that the improbable can happen. But note what a commentary this really is on the improbability of life... that it requires all our concentration and capabilities just to balance some stones beside a stream. How much more improbable is life? And the same goes for freedom, peace and prosperity (social life). The grenade games are the stones already lying on the ground. The tennis-ball games are stones stacked on each other -- but incomparably more improbable than that.

    The coming AI gamification of everything is terrifying beyond words. If you don't see that, I challenge you to think more deeply about the matter. It is truly apocalyptic, in the biblical sense of that word. I'm not saying gamification can't be good, or can't be turned to good use. But if we're just throwing the deck of cards in the air, let them fall where they may, the worst-possible outcome is virtually certain...

    "If you want a picture of the future, imagine a boot stamping on a human face—forever." (Orwell, 1984)

    Last edited by ClaytonB; 01-02-2025 at 09:13 PM.

  16. #344
    Fight fire with fire -- (the good stuff starts @12:48):




  18. #345
    Dolphin 3.0 8B released

    Dolphin is an open-source, 100% local LLM that you can run on your own machine with full control. You do not need exotic hardware; most gaming PCs can be set up to run a local LLM with decent performance. This model is competitive with state-of-the-art LLMs like Claude and ChatGPT, although your actual performance will scale with the hardware you have. Unlike censored models such as ChatGPT, Dolphin is fully uncensored and will answer any question you ask (within its capabilities), no matter how "politically incorrect" or "dangerous" it might be.
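    As a rough sanity check on the "most gaming PCs can run it" claim: model weights take approximately (parameter count x bits per weight / 8) bytes, plus overhead for the KV cache and runtime. A sketch, where the numbers are rule-of-thumb estimates, not vendor specs:

    ```python
    # Estimate the memory footprint of a local model's weights at a given
    # quantization level. Real usage is somewhat higher (KV cache, runtime).

    def weight_gb(params_billions: float, bits_per_weight: int) -> float:
        """Approximate size of the model weights alone, in decimal gigabytes."""
        return params_billions * 1e9 * bits_per_weight / 8 / 1e9

    # An 8B model like Dolphin 3.0 8B:
    print(weight_gb(8, 4))   # 4.0  -> 4-bit quant fits a typical 8 GB gaming GPU
    print(weight_gb(8, 16))  # 16.0 -> full 16-bit precision needs a far larger card
    ```

    This is why quantized 7B-8B models are the sweet spot for consumer hardware.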

    My belief: The right to run local AI is poised to become the 2nd-and-a-half Amendment...

    From the model card:

    What is Dolphin?

    Dolphin 3.0 is the next generation of the Dolphin series of instruct-tuned models. Designed to be the ultimate general purpose local model, enabling coding, math, agentic, function calling, and general use cases.

    Dolphin aims to be a general purpose model, similar to the models behind ChatGPT, Claude, Gemini. But these models present problems for businesses seeking to include AI in their products.

    They maintain control of the system prompt, deprecating and changing things as they wish, often causing software to break.
    They maintain control of the model versions, sometimes changing things silently, or deprecating older models that your business relies on.
    They maintain control of the alignment, and in particular the alignment is one-size-fits-all, not tailored to the application.
    They can see all your queries and they can potentially use that data in ways you wouldn't want.

    Dolphin, in contrast, is steerable and gives control to the system owner. You set the system prompt. You decide the alignment. You have control of your data. Dolphin does not impose its ethics or guidelines on you. You are the one who decides the guidelines.

    Dolphin belongs to YOU, it is your tool, an extension of your will. Just as you are personally responsible for what you do with a knife, gun, fire, car, or the internet, you are the creator and originator of any content you generate with Dolphin.

    https://erichartford.com/uncensored-models

  19. #346

  20. #347
    Gen-Z are a different breed...


  21. #348

  22. #349

  23. #350

  24. #351



  25. #352
    DeepSeek R1 continues smashing OpenAI...




  27. #353

  28. #354

  29. #355

  30. #356

  31. #357



  32. #358

  33. #359




    Last edited by ClaytonB; 01-28-2025 at 07:46 PM.

  34. #360
    CLIP from SYSTEM UPDATE #397:

    China's Great Leap Forward in AI: What Does it Mean? With Journalist Garrison Lovely
    https://rumble.com/v6edx37-chinas-gr...garrison-.html
    {Glenn Greenwald | 28 January 2025}



