The Kingdom of God has come upon you. -- Matthew 12:28
This is BIG:
“Low-Resource” Text Classification: A Parameter-Free Classification Method with Compressors
It might not sound like much, but this is a canary in the coal mine for AGI. If you're not fluent in machine-learning terminology, this might sound a lot more complicated than it is. In the video, he expresses astonishment that this technique works, but it's not too difficult to understand why it works.

Let's say we have two identical positive reviews: A: "This movie was amazing!" and B: "This movie was amazing!" Both are obviously marked "positive" sentiment in the training dataset. When we concatenate the strings, "This movie was amazing!This movie was amazing!", and compress the result, the compression algorithm is smart enough to recognize that the string has simply been repeated and will encode it that way: "[This movie was amazing!]\r", where I am using "\r" to represent whatever escape code the compressor uses to mean "repeat the last string in brackets." It won't be quite that simple in the compressed format, but this is one of the capabilities of any SOTA compressor. So the compressed length of the two strings joined together will be barely longer than that of either original string compressed alone. The compression algorithm is measuring commonality, but it's doing it in a very sophisticated way.
Now let's take two opposite reviews: A: "This movie was amazing!" and B: "This movie was horrible!" Again, the compressor is smart enough to recognize that "This movie was " is common to both strings, so it will use some short code to repeat it: "[This movie was ]amazing!\rhorrible!" It's not exactly like this, but this is the basic concept of what happens internally. And the compressor is far more sensitive to repetitions and statistical patterns than I am describing here.
In information theory, this kind of measure is called the mutual information, and that's really what NCD (normalized compression distance) is acting as a proxy for.
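The whole idea fits in a few lines of Python. This is a minimal sketch of the paper's approach (gzip compression + NCD + nearest neighbor), using the toy two-review training set from the example above:

```python
import gzip

def clen(s: str) -> int:
    """Compressed length of a string, in bytes."""
    return len(gzip.compress(s.encode("utf-8")))

def ncd(a: str, b: str) -> float:
    """Normalized compression distance: small when a and b share structure."""
    ca, cb, cab = clen(a), clen(b), clen(a + b)
    return (cab - min(ca, cb)) / max(ca, cb)

# Toy training set from the example in the text
train = [
    ("This movie was amazing!", "positive"),
    ("This movie was horrible!", "negative"),
]

def classify(text: str) -> str:
    """1-nearest-neighbor under NCD: label of the 'closest' training example."""
    nearest = min(train, key=lambda ex: ncd(text, ex[0]))
    return nearest[1]

print(classify("This movie was amazing!"))  # → positive
```

No training, no parameters, no neural network; the compressor is doing all the work of measuring shared structure.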
We are on our way to a computable approximation of AIXI (general-purpose AI with provably optimal behavior)...
Last edited by ClaytonB; 08-13-2023 at 04:37 PM.
This is vastly more dangerous than you probably think:
"What, are you suggesting that I should be afraid of this clunky bit of stage animatronics?" Yes. We are building the device of our own extermination...
Microsoft Is Betting Heavily On The Future Of AI
https://www.thefinancialtrends.com/2...-future-of-ai/
Never attempt to teach a pig to sing; it wastes your time and annoys the pig.
Robert Heinlein
Give a man an inch and right away he thinks he's a ruler
Groucho Marx
I love mankind…it’s people I can’t stand.
Linus, from the Peanuts comic
You cannot have liberty without morality and morality without faith
Alexis de Tocqueville
Those who fail to learn from the past are condemned to repeat it.
Those who learn from the past are condemned to watch everybody else repeat it
A Zero Hedge comment
AI David Attenborough narrates a documentary about my cat...
https://www.youtube.com/watch?v=eTb-7Gn-63o
This AI needs to stop immediately
As AI rapidly improves with Midjourney and ChatGPT, other forms of artificial intelligence are advancing that we do not see as much. People are using AI art and social media to find a way into making money, but AI social media models and influencers are probably the least of our worries now.
https://www.youtube.com/watch?v=lXdCOLSQqXs
exurb1a knocks it out of the park...
These robots are moving a thousand times more fluidly and gracefully than the best-in-class robots that were being exhibited even just a year or two ago. Why are they so much more fluid? Because their actions are not merely "reflexive responses" to inputs, based on blind training against input-output pairs. Rather, the more sophisticated robots are surely running a transformer (similar to the architecture that powers GPT), and this enables their responses and motions to be generated at a very high "semantic" level. Mimicry is relatively trivial, but getting robots to generate motions that exude some kind of "intentionality" is almost exactly the same problem as getting "sensible" responses from a chatbot; the only difference is that the medium is not text chat, it is servo motors and cameras.

To the neural nets, internally, it's all the same. A neural net has no idea what "words" are, nor "cameras" or "motions". The input is just a stream of numbers, and the output is also just a stream of numbers. Thus, "I wave goodbye" and actuating servo motors to make a waving-goodbye motion are essentially the same thing; the only difference is the encoding. "I wave goodbye" is ASCII characters, whereas a physical waving motion is generated by the commands to the relevant servo motors.
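The "it's all numbers" point can be made concrete with a toy sketch. Everything here is hypothetical (the "model" is a stand-in function, the joint angles are made up); the point is only that the same code consumes either encoding:

```python
# To a network, a sentence and a motor trajectory are both just number sequences.
text_sequence  = [ord(c) for c in "I wave goodbye"]   # ASCII byte values
servo_sequence = [90, 120, 150, 120, 90, 60, 90]      # hypothetical wrist angles (degrees)

def toy_net(xs):
    """Stand-in for a sequence model: numbers in, numbers out."""
    return [2 * x + 1 for x in xs]

# The exact same "model" processes either modality unchanged;
# only the encoding/decoding at the boundary differs.
out_text  = toy_net(text_sequence)
out_servo = toy_net(servo_sequence)
```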
Based on current progress, I feel confident suggesting that we could see, within a year, a fully conversant and fully interactive droid on a 60 Minutes special or something, something that will make Sophia look like 1980s animatronics. Eye/face-tracking, attentional focus, apparent "state of mind", intelligible and contextual facial gestures (still some uncanny valley, similar to the way GPT can run off into the weeds for no apparent reason), real-time voice-to-text-to-voice transcription, and so on. This will be a big deal culturally, because when people see something on their screen that is really responding, live, to an interviewer, and which is clearly not just some kind of Hollywood lights-and-magic trick, the New Ager/transhumanist dam is going to burst. People are going to go absolutely bananas on this stuff, unlike anything that ever came before. You thought Beanie Babies was a craze? Wait till people are shopping the latest (literal) iRobot in the Apple Store. The amount of obsession over these machines is going to be like nothing ever seen before.
Last edited by ClaytonB; 08-21-2023 at 11:06 AM.
I'm only surprised it took this long. Police cannot solve this in any way, shape or form. Welcome to Sodom 2023...
In Spain, dozens of girls are reporting AI-generated nude photos of them being circulated at school -- while the police investigate, the mothers of the affected girls have organized to take action and try to stop those responsible
Last edited by ClaytonB; 09-23-2023 at 06:00 AM.
Where would we be without AI?
Did I mention that we're in the AI singularity?
Still safe for now...
Could paradoxes be the key to separating genuine humans from machine NPCs?
https://www.androidauthority.com/sna...ained-3329362/
Qualcomm’s Snapdragon 8 series of chipsets powers most high-end Android phones on the market. The company has now pulled back the curtain on its latest flagship processor, the Snapdragon 8 Gen 3.
Between the revised CPU, tweaked GPU, AI enhancements, and new camera tricks, there’s no shortage of improvements and new additions here.
...
Generative AI is everywhere, and Qualcomm is taking advantage of this trend. The company says that the Snapdragon 8 Gen 3’s upgraded Hexagon NPU is designed with generative AI in mind. Headline improvements include up to 98% faster performance than the previous generation, a 40% efficiency boost, a two-fold boost to bandwidth in large shared memory, and more bandwidth feeding the Tensor Accelerator. Whealton says it’s also implemented a separate voltage rail for the Tensor Accelerator, allowing the NPU and Tensor silicon to each run at different power levels for a better balance of performance and efficiency.
Qualcomm says the chipset supports large language models with over 10 billion parameters running at almost 15 tokens per second. So what do all these improvements mean for actual use cases?
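For scale, here's a rough back-of-envelope calculation of what those numbers imply. The 4-bit quantization and memory-bandwidth-bound decoding are my own assumptions, not anything Qualcomm stated:

```python
# Back-of-envelope: decoding one token streams the full weight set from memory once.
params = 10e9            # 10 billion parameters (from the article)
bytes_per_param = 0.5    # assumption: 4-bit quantized weights
tokens_per_sec = 15      # from the article

bandwidth_gb_s = params * bytes_per_param * tokens_per_sec / 1e9
print(f"~{bandwidth_gb_s:.0f} GB/s of effective memory bandwidth")
```

That's on the order of 75 GB/s sustained, which gives a sense of why the bandwidth improvements to the NPU and shared memory are the headline items here.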
One major benefit is that you can expect much faster image generation via Stable Diffusion. Qualcomm previously demonstrated on-device Stable Diffusion on a Snapdragon 8 Gen 2 reference handset, taking over 15 seconds to generate an image from a text prompt. However, the company says Stable Diffusion now takes less than a second to generate an image. The company also says it’s working with Snapchat to implement this faster Stable Diffusion solution in the app.
Another interesting addition is “on-device personalization” for AI. Qualcomm says it’ll use your device’s sensors (e.g. GPS, Wi-Fi, microphone, Bluetooth, camera) to personalize chatbot queries. So if you were to ask a chatbot about the best restaurants or activities to do, you can expect more personalized responses based on your location and other factors instead of having to explicitly specify this in your query.
Qualcomm is also touting the privacy benefits of on-device personalization. The company sought to assuage concerns that apps would have access to this personalization data. Vinesh Sukumar, Qualcomm’s head of AI and machine learning, claimed that any app using this function would only get a “refined element of the input prompt that is filtered” before it gets to the app. He added that this personalization data is discarded after a prompt is generated.
Either way, Qualcomm will showcase an AI system demo running on-device at the Snapdragon Summit, powered by Meta’s Llama 2 LLM. The company notes that this demo offers “end-to-end” voice support, so you can talk to the chatbot and have it talk back to you.
Finally, the Snapdragon 8 Gen 3 will pack support for multi-modal generative AI models. That means you can input text, images, and speech and have these generative models output text, images, and speech in return.
This improved generative AI support entails more than just better voice assistants and failed attempts at naughty AI-generated art, though.
...
In AI powered future, cell phones use you!
~~~
More:
What if an AI chatbot accused you of doing something terrible? When bots make mistakes, the false claims can ruin lives, and the legal questions around these issues remain murky.
That's according to several people suing the biggest AI companies. But chatbot makers hope to avoid liability, and a string of legal threats has revealed how easy it might be for companies to wriggle out of responsibility for allegedly defamatory chatbot responses.
...
https://arstechnica.com/tech-policy/...uin-your-life/
Fortunately, the open-source AI model community is light-years ahead of this crap. Start with the r/LocalLLaMa subreddit to pull on the sweater-thread of running your own GPT-3-class or better AI model, 100% local, on your own personal hardware. To get a feeling for just how extensive the community is, check out this list of open models. That's just a sampling of the most popular models; the sheer number of random models out there is uncountable. Good luck regulating that...
Last edited by acptulsa; 11-02-2023 at 06:14 AM.