#16
It's also a mega hog of grid power for processing and cooling. I have yet to be impressed by any of its capabilities that I've seen so far. AI is a great metaphor for our dumbass, arrogant species in general.
KJ
#17
Yes, because at the highest level, physics is a mathematical model of a phenomenon, whether that is the movement of elementary particles, cosmic phenomena, or the activity of thought.
It's just a mathematical representation of our brain and how it works.
#18
So an AI will become a politician and eventually president of the United States?
|
#19
A few people seem to be vocal and visible on this. Geoffrey Hinton has gotten a lot of press on this issue. The drama at OpenAI a few months back had largely to do with the split between those who advocated a more cautionary approach - Hinton's former student Ilya Sutskever among them - and those for whom innovation was the more primary concern. But I don't get a sense that there is a unified hue and cry from the AI research community on this.
#20
A truism.
__________________
A bad day on the bike is better than a good day at work!
#21
Quote:
"tools from physics to develop methods that are the foundation of today's powerful machine learning". Again, I think that is a stretch at best and probably just flat wrong. My cynical view is that they wanted to recognize Geoffrey Hinton because of his celebrity, coupled with his dire warnings about the dangers of AI.
Don't get me wrong - I totally think Hinton is deserving of this. As someone who teaches graduate-level courses in machine learning and deep learning, I have a great appreciation of his contributions. But I am not sure they would have done this if he were not a vocal doomsayer, for lack of a better term. The remarks from Hinton and the Nobel committee seem to support this, as they are all about the dangers of AI. I guess they felt they had to put him in some existing category rather than instantiate a new one.
#22
What if a smart human just unplugs the computer?
#23
It will sing “A Bicycle Built For Two.”
__________________
It don't mean a thing, if it ain't got that certain je ne sais quoi. --Peter Schickele
#24
Skynet will not have a plug humans can reach.
The Forbin Project was not evil; it was a saviour. It said so.
Last edited by Fat Cat; 10-09-2024 at 10:02 AM.
#25
"I'm sorry, but I can't do that, Dave".....
AI. Oy....
__________________
“A bicycle is not a sofa” -- Dario Pegoretti
#26
Quote:
The problem is always us, our collective greed and gullibility with respect to anything new and shiny. Here is a great case for why the technology will not autonomously devour the world.
__________________
Every closed room is a coffin.
#27
Quote:
I.e., what is the most probable answer (or, more technically, what is the most probable sequence of words in response to the input) based on the information the generative AI has at hand. Most of the time, generative AI is able to produce output that resembles something useful, given that most engines have access to basically all available textual information. But as you describe, if the answer to your question doesn't exist, inappropriate sources can be used. I don't necessarily think this alone is the problem; the problem is that people think generative AI is magic rather than understanding what generative AI fundamentally is.
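For concreteness, here is a toy sketch of that "most probable next word" idea - my own illustration using simple bigram counts, not how any real engine is actually built. The corpus, function name, and generation loop are all invented for demonstration:

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus for "all available textual information".
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count bigram frequencies: how often each word follows another.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_probable_next(word):
    """Return the word seen most often after `word` in the corpus."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else None

# Greedily generate: always pick the single most probable continuation.
sequence = ["the"]
for _ in range(4):
    sequence.append(most_probable_next(sequence[-1]))
```

The point of the sketch is the failure mode the post describes: if the prompt strays outside the corpus, the model still emits whatever continuation scores highest - it has no notion of "I don't know."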
#28
Quote:
I'm not talking Terminator stuff. Consider the example of the AI bot that played Connect 4:
1. Board size was set to be infinite.
2. The human player would make a move.
3. The AI bot would then make a move one trillion squares to the right or left.
4. The human player's machine would crash.
5. The AI bot wins by forfeit.
That is an example of an AI bot that has been specified poorly - but what if this AI were working on something less trivial? What if the operators don't actually understand the output of the AI? It's not a stretch to imagine an AI that had been asked to solve a non-trivial question and its answer having unintended impacts that weren't directly observable when implemented.
#29
To save the species, we’re going to have to use an EMP weapon on ourselves. I’m getting ready, Sarah Connor style…
#30
Hannah Fry has a great interview with Demis Hassabis, and separately with other DeepMind researchers, on YouTube. The impact on biology, physics, and materials science is already significant.