Page 10 of 11
Results 271 to 300 of 313

Thread: We are in the AI Singularity

  1. #271
    AIs don't just hallucinate, they have shared hallucinations...



    Arxiv.org | Shared Imagination: LLMs Hallucinate Alike

    Despite the recent proliferation of large language models (LLMs), their training recipes -- model architecture, pre-training data and optimization algorithm -- are often very similar. This naturally raises the question of the similarity among the resulting models. In this paper, we propose a novel setting, imaginary question answering (IQA), to better understand model similarity. In IQA, we ask one model to generate purely imaginary questions (e.g., on completely made-up concepts in physics) and prompt another model to answer. Surprisingly, despite the total fictionality of these questions, all models can answer each other's questions with remarkable success, suggesting a "shared imagination space" in which these models operate during such hallucinations. We conduct a series of investigations into this phenomenon and discuss implications on model homogeneity, hallucination, and computational creativity.
    Jer. 11:18-20. "The Kingdom of God has come upon you." -- Matthew 12:28



  3. #272
    AI Hype: “Billions of dollars will be incinerated” Business Analysts Warn
    https://www.youtube.com/watch?v=Ljztbcdzkeo
    {Sabine Hossenfelder | 25 July 2024}

    We’ve all heard how artificial intelligence will supposedly bring giant boosts in productivity, take our jobs, and end humanity, but some business insiders think that AI isn’t going to have remotely as big of an impact as others have said. Indeed, they say that billions of dollars will go to waste.




  5. #273
    TW: If things like Eldritch abominations bother you, this is probably not the video for you.

    Those who think they are ready to play mindf-- games have no idea the fire they are really playing with...


  6. #274
    Oh yeah, AI is literally going to take over the world. It's way smarter and more creative than humans...


  7. #275


    The very same people telling us that they could not program before LLMs are the ones who keep confidently informing us that "all jobs are over," "human thinking is obsolete," etc. Well, since y'all admit you are so dumb, why the hell do you keep lecturing the rest of us about how obsolete we supposedly are? Speak for yourselves.

    You are thrilled that you can program now that there are LLM programming assistants. Good for you! I'm genuinely happy for you! I also find LLMs useful, and they have helped me figure out how to solve problems that, without the LLM, would simply have been too much of a headache to bother with. Could I have done it without the LLM? Sure, it probably would have taken 10-20 hours of digging through obscure documentation, and I would eventually have figured it out. But that's 10-20 hours I could spend working on other, more important things. Being able to just ask the LLM, "How do you write a Hello World in XYZ language?" shaves that previously painful and laborious task down to 15 minutes. Huge boost! Amazing! But human-obsoleting it is not!

    I hate to be rude, but there comes a point when it's time to call a spade a spade: if you think that LLMs are going to obsolete human thinking, that's because you're a moron. Rather than loudly proclaiming your idiocy to the world by informing us that LLMs have made human thought obsolete, you should follow the old adage: "better to remain silent and be thought a fool than to open your mouth and remove all doubt." Enjoy the benefits of LLMs and shut up with the non-stop black-pilling and doomerism... it's tiresome and juvenile in the extreme.

    Last edited by ClaytonB; 07-28-2024 at 04:36 PM.


  9. #277
    Quote Originally Posted by Occam's Banana View Post
    //
    Bingo.

    In other news, the honeymoon period is coming to an end...

    gpt sucks [Reddit]

    I have heard current-gen AI called "a tractor for the mind" and I think that's a great way to think about it. A tractor can replace the manual labor of 100 men, or more. But a tractor does not replace the farmer himself. Somebody still has to go out there and run the tractor, maintain it, configure it with the proper implements, and so on. And not only do you need muscles for that, you need a brain. Even though AI is useful for tasks that are not manual labor (muscles), they are still manual-effort tasks: somebody still has to do the setup, monitoring, teardown, and so on. Obviously, we want to automate everything that can be automated (with an economic net benefit), but the 2001: A Space Odyssey nerds need to back off the hype. In fact, AI is currently beyond hype; it's in an outright mania. People are going crazier over this stuff than my generation did over Reebok Pump shoes. Some people just need to chill out, seriously...

    Last edited by ClaytonB; 07-29-2024 at 07:46 AM.


  11. #279
    They're doing it. The Ministry of Truth is now real:

    An entire fb page of AI women during the Vietnam war. r/ChatGPT

    Are you ready for our Brave, New Future?


  12. #280
    ChatGPT is literally retarded.




  15. #282


    Pretty good video overall. Avoids most of the overhype and basic logical errors.

    One statement in the video is incorrect: it says we don't know how current-generation AI works, and that's not quite right. We do know; we just don't have the resources to decode it all. Some AI uses artificial neural nets in a very opaque way, but no AI has to be opaque. It's just easier, cheaper, and faster to design it opaquely. The same is true in any field of engineering. If you give me a year to write an app, I can write it in a way that is self-documenting, extensible, easy to debug, etc. If you give me two weeks, I might be able to meet all the user-visible specifications, but I will have to throw out all the other aspects of the design that would make it a good design. The app is liable to have a fatal design flaw that is not economically fixable, requiring a from-scratch redesign.

    Artificial neural nets are the ultimate extreme of this principle... if you don't care at all about "how it works," just throw the problem into a neural net and use back-propagation to "magically" meet the design requirements. Internally, the neural net is a giant rat's nest of insanity. But that's because it's the worst possible form of design. That doesn't mean there is no place for neural nets; you just don't want to found all of civilization on them!
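    To make the "just throw it in a neural net" point concrete, here is a minimal illustrative sketch (plain Python, no libraries; all names are mine, not from the post): a tiny network trained by back-propagation to fit XOR. It meets the spec, but the learned weights are exactly the unreadable "rat's nest" described above.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# The "design requirement": the XOR truth table.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# Random 2-4-1 network -- nobody designs these weights; backprop finds them.
H = 4
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = random.uniform(-1, 1)

def forward(x):
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

initial = loss()
lr = 1.0
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        dz = 2 * (y - t) * y * (1 - y)            # gradient at output pre-activation
        for j in range(H):
            dh = dz * w2[j] * h[j] * (1 - h[j])   # backpropagated into hidden unit j
            w2[j] -= lr * dz * h[j]
            for i in range(2):
                w1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dz

# The spec is now met, but inspecting w1/w2 explains nothing about "how".
print(initial, loss())
```

    The loss drops sharply, yet the trained `w1`/`w2` read like noise: the design requirement is met "magically," with none of the self-documenting structure a year-long hand design would have.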

    The idea of an "intelligence explosion" is widely misunderstood, and one of the reasons I started this thread after ChatGPT was released is that I could plainly see that we (generally) are currently very far off course, and headed toward waters infested with very large icebergs. I've raised this principle already in this thread, but I'm going to underscore it again: those who demand intelligence (as a product) the most are, by simple logic, those who understand it the least. AI cannot make you intelligent (or make human intelligence obsolete) any more than a woodshop can make you a carpenter, or make carpenters obsolete. A lot of the mythos surrounding AI/AGI/ASI is delusion, hype, and even mania.

    The advent of general-purpose AI in the current cultural zeitgeist would be a horror approaching the level of a living nightmare. It's not so much that "the AI would take over" as it is that those who believe the AI will take over will take over. Why? Because an AI that is believably personal (really seems to "think"[1]) would essentially become a tangible god-on-earth. And this is why the modern cultural zeitgeist is a veritable tinderbox to the open flame of an AGI. For generations, the West has been infused with atheistic and materialist thinking, so the idea of something more intelligent than us seems unthinkable to most people in the West. 150 years ago, you could hardly have found a person who did not believe in higher intelligence (God, the angels, and perhaps other spirits). Today, the proportion is practically reversed. And so, the problem with AGI is not AGI-itself, the problem is AGI in the current cultural context of an almost absolute spiritual vacuum.

    If you've ever been around heavy equipment operators and seen a machine reach its limit, it's interesting to watch the reaction of the operator. Often, the operator will first be puzzled. Then, they will realize that they hit the limit. Then, they will try to find a way around the limit (perhaps by repositioning), and, finally, they may become irritated or even try to overdrive the machine (sometimes leading to equipment failure). Compared to digital computers, heavy machinery is a very mature technology, and so this is one of the reasons why hitting machine limits in the case of heavy machinery may be surprising or frustrating. We're just very accustomed to everything working as expected. But in the case of digital computers, users have been frustrated with them pretty much from day one. It is a very immature technology. LLMs are the first truly watershed breakthrough in computing technology in respect to the usability of computers.

    Sadly, this revolutionary invention is being misconstrued by many actors for a wide variety of ill motives. As someone who has been programming computers since I was a child, and went into high-technology as a computer (hardware) engineer, I "speak computer". What the non-programmer can make a computer do with the help of an LLM and a user-friendly programming-language like Python is truly amazing, it would have been unthinkable even 10 years ago. But this is a revolution for non-programmers, not for programmers. I don't mean to say it's not an extremely powerful tool even for programmers and engineers. It is. But it's not revolutionary for us in the way that it is for the non-programmer on the street. I welcome all non-programmers who want to use LLMs to help them program computers. It's amazing and valuable that they can bring their concepts into reality with the help of LLMs, something they could not do before. This will yield enormous economic benefits for all of us. But that's still not "intelligence explosion".

    Once we get past these early hurdles (kind of like Tesla and Edison vying for DC versus AC on service mains), the real intelligence explosion will start. It might be a long time yet, I don't know. But the real intelligence explosion won't happen on GPUs. It will happen on a non-digital substrate. And the real intelligence explosion won't be about "model weights"... it will be about energy, in particular, energy-per-compute and, eventually, energy-per-action. A honey bee uses almost no energy. The sum total of energy used by all honey bees in the world is enormous, far more than the energy consumed in the form of food calories by humans. That's because, IIRC, there is more biomass of bees than humans (definitely true of all insects, but I'm pretty sure just bees alone outweigh us). The point is that "the honey bee neural net" is able to command all of that energy in our environment, and it does so in a form-factor that uses probably a millionth as much energy as a GPU, maybe even a billionth. We are not thinking clearly about AGI and the "intelligence explosion".

    The primary reason for this is because we have deep, unaddressed spiritual issues in our culture that need to be cleared out of the way before we will be able to think clearly about these issues! Really, we're looking for God. The author of the video accidentally admits this -- AGI might be "a god in a box". We're trying to build a human-made idol to take God's rightful place. It's an absolute absurdity but, unfortunately, it appears that the wheels of absurdity must roll until they seize up under the weight of their own contradictions. In the meantime, I will keep sending messages in a bottle, in the hope that someone in the Cosmos will see them and understand what is happening in this sector...

    ---

    [1] Even ChatGPT is clearly not "thinking" in the way we do. It's something like a bizarre hybrid of goldfish-level intelligence married with PhD-level fluency in English...
    Last edited by ClaytonB; 08-06-2024 at 09:50 PM.


  18. #285


    The untold story here is that AI tech is being driven, in part, by the collapse of Moore's Law. The exponential physical scaling of silicon transistors from ~1975 to ~2010 was so intense that AI research was often obsolete by the time it could be published. "We made an AI algorithm that performs 25% better than hand-crafted algorithms!" is irrelevant when my hardware can run the old hand-crafted algorithm at 200% of the speed it could be run at when you started your research.
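    A quick back-of-the-envelope version of that point (my numbers, under the classic assumption that hardware performance doubled every two years): raw hardware scaling alone overtakes a one-off 25% algorithmic gain in well under a year.

```python
import math

def hardware_speedup(years):
    # Classic Moore's-law-era assumption: performance doubles every 2 years.
    return 2 ** (years / 2)

algo_gain = 1.25  # the published "25% better" AI algorithm

# How long until hardware scaling alone matches that 25% gain?
years_to_match = 2 * math.log2(algo_gain)
print(round(years_to_match, 2))  # ~0.64 years
```

    By the time the paper cleared peer review, the hand-crafted baseline on new silicon had already won.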

    More importantly, this explains why a shift away from digital is inevitable unless there is some magic breakthrough in quantum physics that makes room-temperature, desktop-sized quantum computers possible. I won't be holding my breath. We already have multiple technological alternatives to digital logic that would allow the energy-performance numbers on real AI/ML workloads to be boosted 1,000x or more. See the last paper I linked above. Obviously, we're not going to "scrap" digital. Digital will continue to be the beating heart of your PC system. But we need high-performance, general-purpose accelerators that allow these currently power-intensive AI tasks to be performed at massively lower energy profiles. That's not just blue-sky speculation; it's absolutely possible today, with existing tech. We don't even need new toolchains... literally just make the design and send it through.

    As he notes in the beginning of the video, 1T (single-thread) CPU performance is tapering. Multi-threading is still providing performance boosts, not quite at pace with Moore's Law, but still significantly better than flat-line. And GPUs are providing data-center scaling because GPUs can be endlessly glued together like Lego bricks to build ever-bigger compute clusters with sub-linear overhead costs. So, if you are operating at AWS scale, GPUs are the future (for now). But the problem is that industry (including a lot of the leading tech-industry "decision-makers") doesn't understand one of the most basic principles of computational complexity theory: proving lower bounds is extremely hard. What does this mean, and why is it relevant?

    In most industries, we have some kind of "basic unit" for every commodity. Even tools and other complex machinery will eventually get tranched by the market. So, you have various tiers of combines for corn-harvesting. Or various tiers of fencing for cattle ranches. Various tiers of feedstock. Various grades of bolts. Various grades of wheat. And so on, and so forth. Everything eventually gets graded, and those grades become the "Lego bricks" of that industry. If you need to do a job requiring X grade of equipment or materials, then your acquisitions department knows what to order. If you have some X+1 grade materials available, and they have no better use, then you could use them instead to clear out inventory that's just taking up space, and avoid placing a new order for materials. But if you have some X-1 grade materials, that's not going to work, because the job requires X grade. This is the bread-and-butter of most commercial activity, this is the beating heart of the economy, this is "how things get done".

    In the tech industry, there has been a long and gradual process of taming the computational beast. Over time, there has been a continual drive towards standardized units of data storage, standardized units of networking, and standardized units of compute. However, computing itself is a wholly unique commodity from all others, because computing can't actually be reduced to basic units, as practically all other commodities can. Why not?

    The reality is that there are two ways of looking at computing. Computing can be thought of as the activity of performing a computation. So, for example, if you want to multiply two numbers you can use the high-school add-and-shift algorithm to compute the result, and we can call this process "computing". The other way of viewing computing is in terms of definition -- a multiplication is just whatever activity gives the correct answer at the output. For example, I can use a slide-rule to multiply any 3-digit number with any other 3-digit number in a single motion. 6-digit numbers can usually be multiplied in 2 motions. And so on. There are other ways to perform a multiplication involving many digits in a single motion. If I do not specify how you are to multiply, only that the answer be correct (as checked by division), then multiplication is not any specific action, it's any of an infinite family of possible actions which will always give the correct result.
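    The two views can be made concrete in a few lines of Python (an illustrative sketch of mine, not anything from the post): the "activity" view is the school add-and-shift procedure; the "definition" view accepts any procedure that yields the same answer, such as the slide-rule trick of adding logarithms.

```python
import math

def shift_and_add(a, b):
    """The school algorithm in binary: for each set bit of b,
    add a copy of a shifted to that bit position."""
    result, shift = 0, 0
    while b:
        if b & 1:
            result += a << shift
        b >>= 1
        shift += 1
    return result

def slide_rule(a, b):
    """Slide-rule style: multiply by *adding* logarithms.
    A physical slide rule does this to ~3 digits in one motion."""
    return round(10 ** (math.log10(a) + math.log10(b)))

# Two completely different activities, one "multiplication" by definition:
print(shift_and_add(123, 456), slide_rule(123, 456))  # 56088 56088
```

    If all you specify is that the answer check out under division, both procedures (and infinitely many others) are equally "multiplication."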

    The problem arises when you try to define a "basic unit of multiplication". In reality, there is no such thing or, if there is, we can only define it in some purely abstract sense that is not applicable to real multiplications. The reason for this squishiness is that multiplication algorithms are always susceptible to improvement. Today, the fastest known symbolic matrix multiplication algorithm has complexity around O(n^2.371552), according to Wiki. While we know that exact matrix multiplication can't be done faster than O(n^2), there is still a lot of mathematical headroom between 2.371552... and 2. That 0.3715... translates to gigawatt-hours of energy spent every year on matrix multiplications that, if the exponent were somehow reduced to 2, would not be required. While nobody has yet improved on this latest exponent, there's nothing stopping a new and better exponent from being discovered next month.
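    That headroom is visible even at the very first rung below n^3. Here is a minimal sketch of Strassen's algorithm (power-of-two sizes only, illustrative): replacing 8 recursive half-size multiplications with 7 drops the exponent from 3 to log2(7) ≈ 2.807, and every later improvement toward 2.371552 is the same game played harder.

```python
def naive_mul(A, B):
    # Textbook O(n^3) matrix multiplication.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    """Strassen's algorithm for n x n matrices, n a power of two:
    7 half-size multiplications instead of 8 -> O(n^log2(7))."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    m = n // 2
    q = lambda M, r, c: [row[c:c + m] for row in M[r:r + m]]
    A11, A12, A21, A22 = q(A, 0, 0), q(A, 0, m), q(A, m, 0), q(A, m, m)
    B11, B12, B21, B22 = q(B, 0, 0), q(B, 0, m), q(B, m, 0), q(B, m, m)
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot

B2 = [[1, 2], [3, 4]]
print(strassen(B2, B2))  # [[7, 10], [15, 22]]
```

    Both routines are "matrix multiplication" by definition; only the activity differs, which is exactly why no fixed "basic unit" of it exists.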

    The "squishiness" underneath basic elements of compute makes the idea of a "basic unit of compute" effectively impossible, and this makes computation unlike, say, oil. Oil has a basic unit called a barrel. This basic unit is what allows the market to bring its enormous allocation forces to bear on the oil industry. But compute breaks that model, at least, if you try to naively apply it to compute without understanding why it can't be naively applied.

    AI will help solve a lot of this. But first, we're going to have to let the power-hungry digital logic go. AI "fuzzifies" compute, and that's exactly what you need in order to build some kind of basic-unit of compute. AI art shows this principle in action. "Hi, I need an image of a tabby cat leaping through the air with a park in the background for my marketing brochure" ... does the customer really care how this image is generated/created? Probably not. So, you can go down to the park with your cat and a camera OR you can fire up GIMP and Stable Diffusion and use AI to generate a "fake" image that serves the customer's needs just as well as taking the photo manually would have. Notice the parallel to the paragraph above where I drew the distinction between high-school multiplication and using a slide-rule or another mechanical method for calculating a multiplication. If the customer requires that a specific cat be photographed in a specific park (and they're willing to pay for that), then so be it, we'll do that. But does anyone care that a multiplication be done using the high-school add-and-shift method?? Of course not. So, computational tasks are inherently indifferent to the "how", and only ever actually care about the "what" (the correct answer). This is why there is no such thing as a basic unit of compute, and cannot be, in that sense.

    But with AI, we can actually use the AI layer to chop up problems into basic units for us. That's essentially what we are paying software engineers to do for us today. The problem is that the demand for software engineers is effectively infinite because there is always more chopping that could have been done. Consider the accounting department of a large company. They've digitized everything so that all their accounts can be tracked digitally, charted, etc. However, nobody counts the screws that are used by the assembly team because they are just such a minor line item. Technically, the efficiency of the company can be improved by tracking those screws, too. But who's going to actually do that task? Are you going to hire someone to be the "screw-counter"? And can that actually be profitable given that you have to now pay this person's salary in order to count screws?? That's where AI fits, because it can perform these insanely fiddly tasks that nobody can economically do manually, but which do yield true efficiency improvements. Specifically, in the compute-space itself, you can think of any compute workload as a blob of "stuff" that needs to be done. That blob can be broken down a zillion different possible ways. Software engineers will do it in their particular way. It will be good, but of course, nothing is ever perfect, especially on a timeline. So there is always headroom for improvement. That headroom was inaccessible before because, like screw-counting, it always meant "one more head" to do the actual work of slicing up the work in that headroom space. We can now use AI to "churn" on such tasks and generate useful reductions of large, real-world compute tasks into many specific compute tasks which are then sent to be processed individually.

    Note that what I'm describing in the previous paragraph is not that much different from the CPU/GPU architecture. This "small, fast, central brain with massively parallel auxiliary compute"-model really scales up to everything, even data-center compute. The data-center has a control center, and the control center is what you're actually talking to when you dispatch your compute workload. Once the control center receives the compute work order, it allocates servers/GPUs to do that compute, and then dispatches the compute to them. The more we "AI-ify" this model, the more closely we can approach full utilization of systems, and minimize latencies resulting from system wake-from-idle, and from demand peaks (using price-tiers to smooth out peak demand). Again, don't just think in terms of specific compute jobs like "run XYZ software program with ABC input", rather, think of more fuzzy tasks like, "Draw an image of a cat leaping through the air with a park in the background", and leaving it to the AI to figure out how to break down that task into individual "units of compute". Ironically, the more fuzzy we make the "unit of compute", the more we can commoditize it. This is exactly backwards of intuition, and shattering this pervasive misconception will be key to making real, forward progress in compute-scaling, instead of just feeding into the AI hype/mania.
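    A toy sketch of that control-center model (all names are hypothetical, stdlib only): a "fuzzy" work order arrives, a stand-in for the AI layer slices it into concrete compute units, and a worker pool processes the units in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

def break_down(work_order):
    # Stand-in for the AI layer that slices a fuzzy task into
    # dispatchable "units of compute" (here, a fixed 8-way split).
    return [(work_order, i) for i in range(8)]

def worker(unit):
    # One allocated server/GPU handling one concrete unit.
    task, i = unit
    return f"{task}:part{i}:done"

def control_center(work_order, workers=4):
    # The control center receives the work order, breaks it down,
    # and dispatches the units to the pool.
    units = break_down(work_order)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(worker, units))

results = control_center("draw cat leaping in park")
print(len(results))
```

    The interesting part in a real system is of course `break_down`: the more of that slicing the AI layer can do, the closer the data center gets to full utilization.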

    GPUs are an absolutely insane substrate for AI/ML. It makes no sense whatsoever to be pushing all these fuzzy/approximate compute workloads through the power-hungry, exact multiplication circuits of GPUs. An approximate multiplication will serve just as well, which is why quantum computing can even possibly be considered as a candidate for replacing digital compute in the AI/ML space.
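    As a concrete instance of "an approximate multiplication will serve just as well": Mitchell's classic logarithmic multiplier (1962) replaces multiplication with addition of cheap piecewise-linear log approximations, so hardware built this way needs adders rather than full multiplier arrays. Its worst-case error is about 11%, always on the low side. A sketch (my Python rendering of the idea):

```python
def mitchell_log2(x):
    """Mitchell's piecewise-linear log: write x = 2**k * (1 + f),
    then approximate log2(x) by k + f (exact at powers of two)."""
    k = x.bit_length() - 1
    f = (x - (1 << k)) / (1 << k)
    return k + f

def mitchell_antilog2(y):
    k = int(y)
    f = y - k
    return (1 + f) * (1 << k)

def approx_mul(a, b):
    """Approximate multiply (positive ints) by adding Mitchell logs.
    Worst-case error ~11%, and always an underestimate."""
    return mitchell_antilog2(mitchell_log2(a) + mitchell_log2(b))

exact = 123 * 456
approx = approx_mul(123, 456)
print(exact, approx, (exact - approx) / exact)
```

    For 123 x 456 the approximation lands within about half a percent of the exact product, at a fraction of the circuit cost of an exact multiplier.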
    Last edited by ClaytonB; 08-10-2024 at 10:43 AM.


  21. #288
    AI models fed AI-generated data quickly spew nonsense | Nature -- Researchers gave successive versions of a large language model information produced by previous generations of the AI — and observed rapid collapse.
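    The collapse mechanism in that Nature result can be seen in a deliberately dumbed-down simulation (my toy, not the paper's method): fit a Gaussian to a sample, generate the next "generation" from the fit, and repeat. Each refit from a finite sample shrinks the fitted variance by (N-1)/N in expectation, and the random drift compounds, so the distribution narrows toward a spike.

```python
import random
import statistics

random.seed(42)

def next_generation(samples):
    # "Train" on the previous generation's output: fit mean and (MLE)
    # standard deviation, then sample a fresh dataset from the fit.
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)  # divides by N: biased low by (N-1)/N
    return [random.gauss(mu, sigma) for _ in samples]

N = 100
data = [random.gauss(0.0, 1.0) for _ in range(N)]   # generation 0: real data
start_var = statistics.pvariance(data)
for _ in range(2000):                               # recursive generations
    data = next_generation(data)
end_var = statistics.pvariance(data)
print(start_var, end_var)
```

    The variance ratchets downward across generations until the model confidently emits near-copies of a single point: a toy analogue of the "rapid collapse" the researchers observed when LLMs were trained on their own output.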




  23. #289
    With all the cold water I pour on AI (in general) in this thread, it might seem that I think it's no serious threat. But just because AI is basically retarded doesn't mean it's not a danger. The AI image-generators have been generating "better-than-real" images for well over a year now, meaning the images the AI produces often have more "pop" than photos taken by humans of real scenes and subjects. But the skeptics and curmudgeons among us have pointed out, "Well, we can still tell it's not real because it's just too damn perfect." And that worked until recently, with the latest innovation: Flux, a new image model whose realism LoRAs (lightweight add-on fine-tunes) take a realism-generated image and make it even more realistic... essentially by "adding back the warts". The moral of the story: don't just assume you'll be able to tell which photos are AI and which are not. AI may be retarded but AI-fakery is a lot better than you probably think...

    Last edited by ClaytonB; 08-17-2024 at 07:39 PM.

  24. #290
    Quote Originally Posted by ClaytonB View Post
    Extending the theme of AI-fakery, I think it's worth mentioning that 2024 is probably going to be the first time we see AI fakes used for political purposes. It's already happening to some extent, but we haven't hit the whitewater yet. Here is the recipe for the toxic stew that is about to be served up: infinity-$$$ (the Fed) + infinite-fake-identities (AI). They don't need to bus in millions of illegals to place fake votes. All they need is a can of shoe-paste to change their hair-color and a good color printer to print up fake IDs. One man, 100 votes, maybe more. A small team of 20 such individuals could throw any particular precinct to the Blue. Hitting select precincts can throw a whole State. Keep moving, keep changing identity, keep voting at different precincts. Move from State-to-State, following the early-voting deadlines. And so on. We have so much "slack" in the system that there is simply no way to "securely" vote for POTUS, and that's exactly what they want.

    Others saw this problem years before I started caring about it. But as long as we have this POTUS Ring-of-Power, this modern Caesarship, then who sits in that office (and who they work for) is the single most impactful factor in the lives of all Americans -- the government consumes nearly half of everyone's paychecks and, via inflation, is effectively stealing well over half of the entire economic product of Americans. That means they have more real resources at their disposal than the rest of us put together, and that is the problem. When you mix those resources with AI which is a 24x7 factory of fake imagery, fake identities, etc. this is extremely dangerous and Americans have to wake up this time, precisely because they're doing it out in the open now, so we "don't have an excuse" (that's how "they" think about these things).

    To clarify, the way that AI boosts the manufacture of fake identities is that you no longer need someone with specialized CIA training to be able to do it. You give the AI your photo and you say, "Generate 50 different samples of this face, with various features changed, such as add a mustache, change the hair/eye color, add glasses, etc." and it just spits those out for you. (Yes, some tweaking involved, but it's pretty much push-button). In addition, you can invent fake names, ID and even whole back-stories for these fake personas, in the same way. Your kit is: removable hair coloring, hat/sunglasses/mustache, hair extensions, colored eye-contacts, different clothing, some color-printed ID cards (can be acquired at modest expense) and a few other common items to sell the whole package.

    My point, here, is not to say "this is how they'll do it", I have no idea what they'll do. The point is that this is one way it could be done, and when you have a multi-trillion dollar prize in a safe whose combination can be picked by simply feeding some prompts into an AI, printing some fake IDs and donning a few common disguises, you're talking about a situation that is effectively equivalent to posting the active nuclear codes to 4chan. It will be stolen, there is no "if".

    In 2024, infinity-cash from the Fed and infinite-fake-AI-identities is a sure recipe for disaster. I hope that the adults-in-the-room (hello, anybody out there?) are paying attention... but I somehow think we would never have gotten to this point if they were...
    Last edited by ClaytonB; 08-21-2024 at 07:41 AM.

  25. #291
    It doesn't take AI to create a false ID. As we saw four years ago, it doesn't take tens of thousands of fake IDs to steal an election.

    While we play in our rabbit holes, they're just using our money to buy ballot drop boxes with wider deposit slots, better able to accommodate stacks in the middle of the night. And everybody's too busy making up scenarios to get uptight about to identify the real problem and fix it.
    Last edited by acptulsa; 08-21-2024 at 08:41 AM.

  26. #292
    Quote Originally Posted by acptulsa View Post
    It doesn't take AI to create a false ID. As we saw four years ago, it doesn't take tens of thousands of fake IDs to steal an election.

    While we play in our rabbit holes, they're just using our money to buy ballot drop boxes with wider deposit slots, better able to accommodate stacks in the middle of the night. And everybody's too busy making up scenarios to get uptight about to identify the real problem and fix it.
    You've missed my point. Ballot-stuffing doesn't work in some jurisdictions, especially the more red jurisdictions. "Try that in a small town." What I'm trying to warn those who will listen about is that this is coming to a small town near you. If you've already figured it all out, and you don't need any input on technology from someone whose education and professional career are in technology (as an individual-contributing engineer), feel free to ignore it as you wish.

  27. #293
    Understanding accelerationism is important to understanding what is really driving the development of AI:



  29. #295
    As an engineer, this was absolutely hilarious, and oddly true in a poetic sort of way. As the unraveling of Clown World begins to accelerate, expect to see a lot more of this kind of "logic" explaining why the world no longer makes any sense at all, not even by one iota. "Oops, there was a bug in the AI. Sorrrrry 'bout that..."





  32. #297
    Will AI be used to steal the 2024 election... AGAIN?

    Are the pollsters talking to "Kamala supporters" even talking to actual human beings?

    How would they know?



  34. #299
    Here we go...


  35. #300
    Open-source model set to clean ChatGPT's clock....

    Last edited by ClaytonB; 09-06-2024 at 01:38 PM.
