Scientists say that once we develop superintelligent AI, we will no longer be able to program it for safety.
by Britta DeVore | published
In today’s edition of GFR tackling the terrifying advances and news in the artificial intelligence community, we’ll examine our latest fear: the inability to control superintelligent AI. We know, we know, this week already brought you a food delivery robot that rolled right through a taped-off police scene to do its job, and now this? Unfortunately, we’re just reporting, and we’re here to tell you that when it comes to technology, things can get out of control pretty fast.
While the idea of superintelligent AI taking over the world is not a new concept (plenty of films and television shows, including The Matrix franchise, The Terminator franchise, and Battlestar Galactica, focus on exactly that), luckily we haven’t actually faced the threat yet. But researchers aren’t saying the idea is completely impossible. According to a study published in the Journal of Artificial Intelligence Research, the chances are not as far-fetched as we’d like to imagine.
In their findings, the researchers argue that if superintelligent AI is exactly that, a level of intelligence beyond our own as humans, then no rule we write could reliably contain it. To break it down further: if we can’t wrap our heads around what the machines might be thinking, we’ll simply be unable to program them so that they never harm humans. In short, if AI ever reaches a level beyond our own, we’re in a world of chaos and alarm.
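The study’s impossibility claim has the shape of a classic computability argument, akin to the halting problem: assume a perfect safety-checking program exists, then construct a program that consults the checker about itself and deliberately does the opposite. Here is a minimal Python sketch of that diagonal trick; every name in it (`make_adversary`, the toy checkers) is purely illustrative and not from the study itself.

```python
def make_adversary(checker):
    """Build a program that defeats any candidate safety-checker
    by asking the checker about itself and doing the opposite."""
    def adversary():
        return "harm" if checker(adversary) else "no harm"
    return adversary

def optimistic_checker(program):
    # Toy candidate checker: declares every program safe.
    return True

def pessimistic_checker(program):
    # Toy candidate checker: declares every program unsafe.
    return False

# The checker said "safe", yet the program harms -- the checker was wrong.
adv = make_adversary(optimistic_checker)
print(adv())   # "harm"

# The checker said "unsafe", yet the program is harmless -- wrong again.
adv2 = make_adversary(pessimistic_checker)
print(adv2())  # "no harm"
```

Whatever verdict a candidate checker returns, the adversary contradicts it, which is why (in this stylized form) no universal containment algorithm can exist.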
But the finding certainly comes with its own scientific conflicts. Some scientists ask: if we can’t control the robots we’re pouring so much information and time into, then why build them at all? And on the other hand, how would we even know the moment we’ve actually created a superintelligent AI that could wipe out the world as we know it? At this point in time, there seems to be no right or wrong answer.
From NASA engineering an army of robotic doctors to assist astronauts to the aforementioned food delivery robots, the scientific community has made great strides in building assistive AI machinery. But, as we know from examples such as the story of a Chicago man who was possibly falsely accused of shooting another person because of an alleged flaw in law enforcement’s AI system, sometimes these systems kind of miss the mark. And, as the research above makes clear, a world ending at the hands of a superintelligent AI is a problem we inch closer to with every new machine created.
But, at the same time, a superintelligent AI ending life as we know it isn’t necessarily the worst way for the world to go! And as long as Hollywood keeps churning out movies about it, like the upcoming Simu Liu and Jennifer Lopez Netflix feature, Atlas, we’ll at least have a better idea of how to counter the machines when they rise against us. Let’s get ready to put all that “useless” knowledge to use.