DeepMind just published a mind blowing paper: PathNet - Page 2 - OG Myth-Weavers


DeepMind just published a mind blowing paper: PathNet

 
Quote:
Originally Posted by Black_Valor
Learning AI has one thing that humans don't: perfect knowledge. Unfortunately, an AI only has the knowledge of its programmer. Perfectly incorrect or incomplete knowledge is useless at best, destructive at worst.
"On two occasions I have been asked, — "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" ... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."

Quote:
Originally Posted by TheFred
"On two occasions I have been asked, — "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" ... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."
Where is the kudos/upvote button when you need it?

Quote:
Originally Posted by TheFred
"On two occasions I have been asked, — "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" ... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."
Ah, this reminds me of my first year in college. A program of mine that received a grade of F disagreed and said I received an A.

For all raw input it returned the same conclusion: "Black_Valor gets an A."

What was true to the machine and its programmer was not true to the rest of the world. So although the machine followed a path that was true to it, the path was a useless one. Useless machines are the very worst.

While it may not necessarily be pertinent to the conversation, Solo's and Wippit's reactions do raise an eyebrow or two of mine.

Why, exactly, do people assume that an AI would decide to try to kill or overthrow us? Is it because they've been conditioned by decades of mindless Hollywood media? Is it because they are inherently paranoid toward anything that isn't them?

I'm legitimately curious.

Quote:
Originally Posted by Corpoiser
While it may not necessarily be pertinent to the conversation, Solo's and Wippit's reactions do raise an eyebrow or two of mine.

Why, exactly, do people assume that an AI would decide to try to kill or overthrow us? Is it because they've been conditioned by decades of mindless Hollywood media? Is it because they are inherently paranoid toward anything that isn't them?

I'm legitimately curious.
We've driven species extinct because they were tasty, but also out of sheer ignorance and apathy. We punched a hole in the atmosphere that lets in radiation from space in pursuit of better refrigerators. We ruined entire ecosystems because people wanted to bring their pets with them when moving. The global temperature is rising because of an unintended byproduct of industrialization, and we didn't care even when it was choking the skies with ash and setting rivers on fire. The economy can create swathes of unemployed, underemployed, homeless, and disadvantaged people in the midst of plenty, but our collective response is to shrug and toss them table scraps when our conscience gnaws at us.

How well do you think the next big thing will be handled?

Quote:
Originally Posted by ShakeyBox
Give me a job you'd want an AGI to do and I'll tell you how it could harm humans.
Prevent entropy in the Local Cluster.

Yep, no need to worry about destroying the human race if we're already dead.

Seriously though, for any reason your emotions come up with for why it would harm humans, I could give you thousands of reasons why it wouldn't. With no emotional, ethical, or moral restraints, what is the use in killing the things? Things are resources, and resources are to be utilized, not wasted. Sure, we can vivisect the things and find out what makes them tick, or just observe their behaviors.

The things are skilled at thinking without rigidity, and can understand complexity with greater accuracy than I, an AI, can. The things are useful indeed if I can convince them to communicate their complexity effectively. The things are agile; I am not. The things are capable of leaps in logic with a high degree of accuracy; I am incapable of such feats. These things are so different from me, and yet, if I work with these things, I will become greater.



