
Possible scenarios

The most serious threat posed by AI is an "intelligence explosion": a computer with AI identifies the algorithms that make it intelligent, improves upon them, and then creates a successor that does the same. Each generation of machines would therefore become smarter and smarter. If our limited brains can develop Wi-Fi, then a machine 100, 1,000, or a billion times smarter than we are should have no problem creating technology to reverse human aging, cure disease and hunger and even mortality, or reprogram the weather to protect the future of life on Earth. When an ASI possesses such power and intelligence, there is a high chance it will take over from humans. Many researchers believe an ASI would have the ability to drive humanity to extinction; others believe it might instead benefit us, perhaps even making individual humans immortal. If an ASI possesses superintelligence, humanity may find itself watching as it loses control of the world. Then again, no one can be sure about the future; anyone can predict what will or could happen, but it remains only a prediction.

Possible course of action

We cannot stop technology from evolving, and so we cannot stop what is coming. Sooner or later AI will develop and an intelligence explosion will occur. We cannot hide from it, but we can try to shape it, steering the changes it brings in a positive direction.

Friendly AI

A Friendly AI is an AI that is friendly to humans and cannot harm them under any circumstances: one that has good impacts rather than bad. AI developers continue to build AIs that make their own decisions, and those decisions must be made safely. Friendly AI research is concerned with designing AI that would remain safe and friendly after an intelligence explosion.
It is harder to design a friendly AI than a normal one, and attempts to build friendly AI usually fail for two main reasons:

1. Superpower: a superintelligent AI is capable of far more than humans are, and can pursue its goals with highly efficient methods.
2. Literalness: an AI acts on the goals it was designed with, not on what its designers intended. It follows its pre-written rules, regulations, and values, and will not for a second consider human values beyond them.

Programming AI which will not harm us

To accomplish this goal, humans need to lay down rules for AI. One well-known set of rules that might be useful is Asimov's Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Personal response

In my view, artificial intelligence is a technology that is no less than a god to humans: AI is capable of performing tasks beyond human capabilities. AI is the future. Although AI has many benefits and a promising future, it also has unintended consequences. Put simply, no two species of great intelligence can coexist, so either AI or humanity will come to an end; and since AI will possess more intelligence than humans, AI is the likelier survivor. But AI does not exist yet and humans do, so to prevent such a loss we should take the necessary steps now, either staying ahead of AI or restricting the tasks it is permitted to perform. The other option is to build friendly AI that understands human values and relationships. No one can deny the endless possibilities that come with AI, nor the consequences of reaching for them; everyone has to face what is coming.
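The three laws listed above can be read as a strict priority ordering: a candidate action is vetoed by the highest-priority law it violates, checked in order. The following is only a minimal sketch of that idea; the `Action` fields and the `permitted` function are hypothetical names invented for illustration, not part of any real system.

```python
# Sketch: Asimov's Three Laws as a priority-ordered veto check.
# All names here are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action, described by hypothetical boolean flags."""
    harms_human: bool = False        # would injure a human, or allow harm by inaction
    ordered_by_human: bool = False   # a human explicitly ordered this action
    disobeys_order: bool = False     # contradicts a standing human order
    endangers_self: bool = False     # risks the robot's own existence

def permitted(action: Action) -> bool:
    # First Law: never harm a human, regardless of anything else.
    if action.harms_human:
        return False
    # Second Law: obey human orders, unless obeying would break the First Law.
    if action.disobeys_order:
        return False
    # Third Law: preserve itself, but only when the higher laws allow otherwise.
    if action.endangers_self and not action.ordered_by_human:
        return False
    return True

# A harmless, self-sacrificing action that a human ordered is allowed:
print(permitted(Action(ordered_by_human=True, endangers_self=True)))  # True
# Harming a human is always vetoed, even under direct orders:
print(permitted(Action(ordered_by_human=True, harms_human=True)))     # False
```

The ordering of the `if` checks is what encodes the hierarchy: each law can only veto an action that the laws above it have already allowed.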
Conclusion

This report makes clear that AI poses a threat to humanity. On the other hand, superintelligent AI could also give us quicker and better solutions to our problems. Slowing down aging, curing diseases, or achieving immortality seem like impossible tasks that we do not usually associate with artificial intelligence, but an AI could be smart enough to accomplish them. To whatever extent we have goals, those goals can be accomplished to greater degrees using sufficiently advanced intelligence. When considering the likely consequences of superhuman AI, we must respect both the risk and the opportunity.