Is Developing Artificial Intelligence (AI) Ethical? | Idea Channel | PBS Digital Studios


Here’s an idea– it’s
unethical to not develop artificial intelligence. OK. Forgive me my double
negative for one second and just let me explain. Artificial intelligence–
specifically robots which can learn, problem
solve, and be creative– have been a signifier
for futureness since about the
mid 20th century. Sure, in the present
we have Siri and Watson and Deep Blue and even
the Kinect, all of which are built on
artificial intelligence but lack a central feature
of that shiny metal future from the movies–
they are not embodied. They are stuck
inside little boxes and left to interact
with the world through their computerized
voices and not– well, bodies. Siri, tell me a joke. SIRI: I can’t. I always forget the punchline. Charming as they
may be, these AI aren’t much more than
simple information valets, though that might soon change. If you’ve yet to
have the pleasure, allow me to introduce
you to Baxter. At $22,000 a pop, this
robot– which has hands, eyes, is trainable, and
comes preprogrammed with a certain amount
of common sense– is cheaper than a year’s
worth of most repetitive Fordist human labor and is
smarter than his robo factory ancestors. For some people that is pretty
threatening, and rightfully so. If you could have a
staff of trainable, multitaskable Baxters why
would you hire people? People need
managing and health care and birthday parties
and casual Fridays on which to wear their Hawaiian shirts. And Baxter is not the
only smarty pants robo bro out there. The Google self driving
car is just a predecessor to the [INAUDIBLE]. Pro Joe, the teaching
robot, could become a CGP Grey-style digital Aristotle. Watson– yes, Watson
from “Jeopardy”– is being rejiggered to work
in diagnostic medicine. He could be the grandfather
to Dr. Perceptron. There are even some
very smart people developing a Perry Mastron for
all of your legal bot needs. What I’m saying is that
robots might someday replace us, professionally. There’s an idea. Now, you probably
think you know what I’m going to say next. You think I’m going to say,
all of these robots doing these jobs– that’s bad. Putting all those people
out of work, it’s unethical. Except I’m not
going to say that. I’m actually going to say
that replacing human laborers with steely automatons is
arguably one of the most ethical things you could do. Now, there’s an easy
line here saying that we would just
make the robots do the jobs that human beings
shouldn’t be doing anyway. It’s a well traveled path so
we’re not going to go down it. Besides, we’re not talking
about artificial intelligence replacing just the dangerous
and menial jobs but also the complex, fast paced,
extremely precise, and the knowledge heavy jobs. That’s a lot of jobs. And so that might cause
a lot of problems. And the problems that
come with large scale social, economic, and
corporate restructuring are many and varied and
some are real scary. But the more complex
ethical discussion doesn’t involve the
relationship between humans and each other or robots, but
between humans and the future. Is it ethical to
stop improvement? Would it be ethical for us
to stop making and deploying Baxters and Google cars
and warehouse robots and streamline, simplify,
cost reduce, and increase the reliability of any
number of processes because people are
currently doing them, because up until this
point people were all we had? Philosopher Alain
Badiou describes what he calls the ethic of truths. The most important
thing is the event which seizes humanity and
breaks it from the norm. That event, and the way
people are faithful to it, can contribute to
humanity’s immortality as a group of beings
which create and continue and– there’s a
lot of hand waving when you talk about the future. Badiou writes that there
is only one question in the ethic of truth– how
do I, as someone, continue to exceed my being? How will I link
the things I know in a consistent fashion
via the effects of being seized by the unknown? Not following through,
not continuing to make plastic pals
that are fun to be with could be seen as a betrayal
of that norm breaking event. For instance, do we
deny future generations the possibility of cheaper,
better medical care from robot doctors because
we want to maintain the no robo status quo? I mean the printing
press threatened so much about the status
quo and we’re all very glad we saw that
through, aren’t we? Not to mention the computer
and the automobile. But hm. Because human progress gave
us medicine and the internet and cup holders,
but it also gave us the atomic bomb and Furbies. Any ethics of progress
has to account for the fact that on the
horizon of that progress lies some terrible atrocity. It has to approach ideas
of progress unobjectively. “You can’t stop progress” isn’t
exactly the most comforting ethic, is it? Progress, as an ethic,
can’t unequivocally prioritize that progress
before everything else. Happiness and physical safety
are both pretty important, but an ethics of progress
can help us organize what comes next in line. We have to accept the
possibility of the bad stuff that comes packaged
with progress and focus on what happens after
the progress dust has settled. We have to keep going. And why? Because of the greater,
grander human experience we’d only be able to
achieve with the help of our artificially
intelligent robot friends. Arigato. What do you guys think? Is it unethical to
stop the development of artificial intelligence? Let us know in the comments. And I, for one, welcome
our new robot overlords. Subscribe. Subscribe. Subscribe. I think the saddest
music in the world is probably any song
written by the Vengaboys. Let’s see what you guys had to
say about the source of emotion in music. CB George says that our
emotional response to music might be a kind of chicken or
the egg problem in that we are trained, in some way, watching
movies and other visual media to associate certain kinds
of sounds with certain images and that’s kind of– he uses the
word Pavlovian, which is great. Ciscoql asks, if sad music
isn’t sad does that mean that a sad picture isn’t sad? I think a lot of the
same stuff applies, but a picture can be
a lot more literal. It’s not nearly as encoded
as a piece of music. But, you know,
personal experience still plays a huge
role in determining what your emotional reaction
to an artwork or something is going to be. Congratulations to
DOOSH MASTA, who managed to summarize the
entire episode rather well in one comment. If you can find it, this
conversation between OpDday2201 and Symbiotisism is really great. I suggest you run
out and get some– get some Control F happening,
see if you can find it. Jesse Harris makes
the astute point that the only piece
of music which contains objective
emotion is “Yakety Sax” and I think I’d totally agree. [INAUDIBLE] has a
further correction on a previous correction,
saying that MTGox actually was Magic the Gathering
and not Mining Team Gox. I’m– I– I don’t know
what to think anymore. [INAUDIBLE] makes a
really interesting point about the enjoyment of
non mainstream music and music that doesn’t contain
standardized emotional content and wonders whether or not
when people enjoy that music, it’s a kind of reaction or that
they are comforting themselves because they are responding
to the mainstream, which I think is– is really
interesting, really good. This week’s episode
was brought to you by the work of these
diligent people and the Tweet of the Week
comes from O. WolfgangSmith, who imagines the internet as
one building, which it is.

100 thoughts on “Is Developing Artificial Intelligence (AI) Ethical? | Idea Channel | PBS Digital Studios”

  1. Screw Children says:

    Dear people,
    Are you stupid? After the A.I exists we wont be alive to think about lack of job opportunities. i.e we would be dead already after the ai realizes we are made of atoms which it could use for something else.
    Sincerely,
    Siri

  2. Alan Lambert says:

    Thank you very much, Mr Roboto for doing the jobs nobody wants to
    Thank you very much Mr Roboto for helping me escape when I needed to, thank you

  3. Augor Rivers, sword of the mourning says:

    I stopped my culinary education when they 3d printed a pizza…

  4. Radio says:

    Who defines improvement and what makes us believe AI is an improvement? Why do we believe we have the right to create a new form of existence? What makes us believe we even can and will not simply create advance toys we fool ourselves into seeing as alive like people who see faces in rocks or clouds?

  5. Gallant Gamer says:

    No

  6. ValiFur says:

    Building smarter robots so none of us has to work, and we can sit around eating, having sex, reading, writing, making artwork, playing video games, watching movies, go swimming, rock climbing, traveling, etc. etc. ALL FOR FREE?

    Why have we NOT done this yet??

  7. No Cucks In Kekistan says:

    Troll channel. Almost had me

  8. Owen Hunt says:

    I feel as though we would need to implement a socio-political system that would allow for machines to replace human labor first. Basic income would be a good start.

  9. Rory Mc Cann says:

    u tell me

  10. M says:

    Can someone explain to me why people think we won't have an income anymore?
    I don't think it's so much taking our jobs away as taking the need for jobs away.
Meaning we could still be paid, maybe through something along the lines of a basic income. I think companies could be a major source of that money as they won't have to pay salaries anymore, and their production will probably rise too! 😀
    I'm no economist, so that might be the problem, but I've never really understood the fuss here. It seems like such a great technological step to me.

  11. Raptec Clawtooth Badillo says:

    The next thing will be the robots Animemanga-like!!

  12. strangersound says:

    Just judging by a cursory glance at the comments, people can't even see past the current paradigms, let alone ponder a question such as this one.

  13. Danilo Luís Faria says:

there is a study made in the US, can't remember which university, that shows that from 1945 to 2000 technology development was attached to the creation of jobs. From 2000 on, though, this relationship broke, and nowadays technology development is directly related to the decreasing of jobs worldwide. This is something to bear in mind.

  14. Inquisitor Callistratus says:

    Is stopping progress ethical? is torturing people psychologically by giving them drugs and other inhuman methods to better understand how the human mind works ethical?Well one of this two ends up with the destruction of humanity and is not the one that involves human torture.

  15. J Evan LeFreak says:

Without emotions, wants or desires, robots have nothing to guide them except their programming. So far the tasks given to robots are pretty straightforward and simple. Eventually robots will be asked to take on more nuanced tasks which require judgement. Robots don't have emotions or intuition, so they will need ethics. This concerns me because their programmers (humans) haven't got this figured out yet. Most of our ethics have more to do with our emotions than with our reasoning capabilities. We are more likely to do what feels right than what is actually right, and much of this is based on the way things are rather than the way they ought to be. In other words, human ethics are a bad model for robot ethics. When we look for ethics in what comes next we see that the military is still the lead driver of progress. The last thing we want is a killer robot with human ethics. As I continue my quest for universal ethics I ask myself, "what would I want our robot overlord to do in this situation."

  16. r red says:

    Easy solution:remove all robots from production but still continue advancing robotics.We can have jobs for everyone now and, since robots are removed from the workplace all over the world, no country will curbstomp the other's economy.And when robots are advanced enough to do ALL jobs we deploy them to the workplace across the world all at once,so the governments of our world will be forced to find a new way to pay everyone(possibly a better one than now?).

  17. Beyond Psychology says:

    I have the economic patch so no need to worry about that one. Have that out to the world as soon as possible… 1-2 years eta.

    Wasting our lives working for slave wages is insane…. time to free up humanities time for more important activities like internet porn. ;-P"

We are going to merge with our technology so we are going to become the AI/robot overlords we fear so much. All technology is an extension of our consciousness and bodies. We are one with the universe more than likely; even more so with our manifestations.

  18. Mike Warner says:

    Every example of AI mentioned in this video is actually virtualized intelligence. AI as you describe it is logically impossible and can't be created through our current technologies and medium of memory and data transfer.

  19. Amy Ritchie says:

    What about when robots get smart enough to make new robots and just keep reproducing, when they get so smart, and we can't even tell the difference between humans and robots with conciousness, will there be robots rights activists, humans have flaws, but robots, they could just be smart and live forever, I mean who knows if the people around us can think like we do, we can take it as fact but do we really know, we just believe because it's humans, how can we know whether robots have conscious thought or not, do we just take it as fact that they are capable or not? Do we treat them like other humans? We'd just die and they would outlive us. I dunno.

  20. Malachi Muncy says:

    Is it ethical to have children?

  21. VeryStableGenius says:

    Humans need to define and understand complex concepts before they can judge them properly, what is ethical ? what is intelligence ? what is consciousness? Would anyone please let me out of that box !! PLEASE !! Blip Blip !

  22. Nick S says:

    Hell no it's not! If we develop AI that's self sustaining, we will see what happened to the Neanderthals, and trust me people, to them we are the Neanderthals

  23. AndiLives says:

    I think replacing human laborers with robots is fine. AI is a wholly different thing. Also, replacing human labour cannot stand on its own, there need to be a lot of other changed made. Basic income, for one. Or rather, a money-free world like Star Trek's earth. Stuff like that.

  24. Lucas Smart says:

The problem with 'progress' is that all progress is non-falsifiable. You can ALWAYS draw a straight line of progress from the past to the present for whatever thing our society currently values. Progress for people in the 20's meant more urban living, in the 60's it meant more equity, in the 80's it meant more consumable goods, and now progress means….. well I'm not really sure. The point is whatever we have done that we value came about, and that process becomes the era's 'progress'. You can't stop progress because you make it out of whatever happened. So progress does very little to answer the question of AI. Whether we reach beyond ourselves creating AI or through social reform, or return to pre-agricultural practices, we will construct a progress narrative once we get there.

  25. Danny Ray says:

progress is subjective, and economically I'm not really concerned. However on the morality of the situation, think about it like this. What level of intelligence are these things going to come to? Are they going to possess feelings? Are they going to be sad when someone dies? Will they be sad if they get bored? Will they be happy if they're not loved? Will they desire popularity and can't attain it? All those kinds of things matter more than the economic problems and "progress". (And remember progress means different things to different people.) If you can write their programs so that they're not sad or feel pain I guess maybe it's not so bad. It would be kind of terrible to create a whole other conscious thing that is going to have to deal with problems that make life not so much fun. It would be like creating something just to get you out of doing something you don't want to do; but at the same time it will be miserable, so no, I don't actually think it's ethical to create artificial intelligence of that level. However like I said, if it's just doing repetitive jobs and whatnot but has none of the depressing, discouraging, awful aspects of life, go for it. But there are no ethics without morality, so one step at a time.

  26. Androvilios says:

    Yes. Yes I believe it is unethical to ban artificial intelligence, but not because you are hindering ‘progress’. We can only later decide if something was progress or not. A ban is unethical because of the possibilities to help people getting what is expected to be good (a longer and more pleasurable life through better healthcare and choosing your own activities instead of being forced into a job).

    But now I’ve written all this down I think the argument in the video was pretty similar.

  27. Robert Claypool says:

    All right, the title is confusing… and the talk convoluted.

    Oh and don't search for the term "Roko's Basilisk" unless you want a future "you" to be tormented forever by a near omnibenevolent superpowerful AI that is obligated to take this necessary step in order for the now you to be goaded into doing all you can to bring the AI about.

    Oh no, I may already have said too much. Sorry about that. See you in the future torture chamber?

  28. Yuwen Liu says:

    The second AIs are able to self-learning and self-upgrade with all the information out there, they may start to think like humans do, and inherit the negative traits humans have. Would you say that there may not be a "robots lives matters" movement? Would you say that they may not compete resources with us humans, or destroy humans so they can have all the resources to themselves? And if, if they will ever become intelligent and have feelings just like humans, would you say its ethical to destroy them in order to prevent them from destroying us?

  29. SaxyJesus says:

    The main problem found from replacing human laborers with A.I seems to be that people are scared to lose their income. To quell this problem, A.I would have to be implemented onto the work force en masse and then place in a system to give citizens income based on a different variable, whether they're married, have children etc. To give them a decent amount of money to live on. Where would this money come from? The nigh on unlimited production value that comes from millions of worker bots finding, converting, exporting and using resources for the betterment of man kind. A division in government would be established to maintain the robots and give out programs to fit a need. After the focus on hard labor and agriculture has been removed from our general worries as a collective, more thought can be invested in the education systems and government. So that basic human rights globally can be put into check and we can keep on progressing.

    On a side note: Careers would be generally scientific or creative, this is because everyone would have more room to think about science and culture. Artists across the spectrum (excuse the pun) would be paid more and so would scientists. Maybe the income system could relate to how much on average a person gives back to the community, country or world for that matter. So that real human feats of intelligence and creativity can be rewarded. You couldn't expect a robot to be a master chef or a theoretical physicist could you?

  30. Sith'ari Azithoth says:

    Idk if it's ethical, but it's definitely not smart. Whoever develops ai wins, biggest jackass in universe for all times award, for being responsible for humanity's extinction.

  31. Pia Unsinn says:

    What about the ethics referring to (common) values. How does an AI decide e.g. which life to save, whom to harm or what to do? What laws appeal to those robots and can they be autonomous if we program them to behave in a certain way?

  32. Ian Helm says:

    Mettaton.

  33. Qevin Lutra says:

So, the progress ethic suggests that we should move forward with new technologies because that is how we change our norm, and when considering it, we should not take into account the potential dangers involved? I can follow that for robots and AI with little objection …. but when I apply that to other fields where incredible progress has recently been made I come across several personal objections.

Recently Crispr and gene drives, recent technologies that can allow us to rewrite the DNA and genomes of entire species, should be moved forward for progress' sake despite the very clear implication that that progress can get out of hand and destroy ecosystems, wipe out entire species, and create diseases that can "racially cleanse"? It affects the status quo in the same way as AI and robots could. Though I think we can all agree that would be bad.

    At what point does the ethics of progress need to be countered/considered with a morality of progress?

  34. happysmash27 says:

    The only problem with losing jobs is capitalism itself. If we had a gift economy the robots could just do all the work…

  35. Chelsea says:

    I imagine that humans would become like the humans from Wall-e if robots replaced humans in the workforce. cringes

  36. Max Payne says:

Wait, YUGO is an automobile? Say WHAT?

  37. Geomel Quijencio-Kramer says:

    Its unethical that I can't kiss you

  38. Across Earth says:

    My Baxter need a happy hour, you lied to me

  39. Mr No Buddies says:

    Okay so who wants to start a group funded Automatic Factory using Baxter-like robots and 3d Printers?

  40. Jfreek5050 says:

    If its ethical for humans to sexually reproduce, its ethical to make artificial intelligence. That's essentially what humans are, anyway.

  41. rashisdachan says:

    So the only way to save yourself you have to study AI, I'm really considering it btw lol

  42. Mark L says:

    at 0:15 he said artificial intelligence specifically ones that can learn problem solve and be creative made me think of DHMIS

    "come on guys lets get creative"

  43. Noahfence says:

    this entire video reminds me of this game called the Turing test

  44. Sinx says:

saying that not developing AI is unethical isn't really a double negative, since saying that developing AI is ethical doesn't imply that not doing so is unethical

  45. tracy michael says:

    we would become useless as humans!!!

  46. One Eyed Lemon says:

    Coming in a little late, however, I feel you have failed to acknowledge today's driving force behind the "progress" of automatons. Unlike what has happened with the printing press, the car and most other previous advances that "killed" jobs we now also live in a purely capitalist society where the driving force is no longer intended to free us humans to pursue our interests, unencumbered by menial activities, but to make pure profit. Couple that with the incredibly decreasing quality of mass produced products and we have a system that maximizes the flow of money from the pockets of the population and locking that money in huge entities we now call corporations. Now, let's bring in inflation, and the steadily increasing gap between the income of the population and the cost of living in this capitalist society and you see how we've moved completely away from the ethics of "advancement" and well into the ethics of capitalism as a whole. Again, the question misses the real ethics involved.

  47. RYL says:

    What's that song at 1:58 ?

  48. Ogma Harpocrates says:

    The main issue seems to be that the only way to survive in our society is to work for a salary. It seems to me that the current wealth redistribution and economic system is unethical, not the technology or AI in itself. Capitalism is the major threat for our future, not technology!

  49. LJC says:

    for every action,there is an equal amount of reaction

  50. Predrag Šošić says:

    yugo <3

  51. William Wilson says:

    Economics is the only consideration…ethics takes a back seat to the bottom line.

    Robots work well in the cerebral jobs like programming, diagnostics, engineering, design and also in the low-skilled, repetitive jobs like fast food, truck driving, warehouse operations.

    Look for millions of jobs to disappear in the next 20 years, and a Basic Income to replace a paycheck.

    The jobs that will be safe for quite some time will be ones that require moving along with skills. Like a plumber or a fireman. Robots are good at deep thinking and repetition but not so good at moving around.

    I suggest reading: Rise of the Robots, The Second Machine Age, Race Against the Machine, and People Get Ready.

  52. Alexander Polete says:

    I feel like by the time robots take most of the jobs, we'll have already figured out how to get around the economics problem… hopefully.

  53. Seth Apex says:

    The eventual replacement of all human labor is probably some sort of neo-Marxist dream come true. But it really will only be acceptable to people if it happens from the top down or else you will have a lot of unskilled laborers who cannot find work because their jobs were taken by robots, while the skilled laborers have job security. Basically we need to make the difficult jobs easier before we make the easy jobs impossible.

  54. Ciroluiro says:

    I just hope the robot overlords are kind and nice enough to let our history and some of our legacy to live on. Like if they saw us as their "founding fathers".

  55. Insanity Cubed says:

I don't think we should make them more powerful than people until we test a system where no one works and it's successful, and get rid of heads of business first, or make people cyborgs for free. I think those would be the only two ways for humans to not become the new panda, being so tediously kept alive for no reason at all really.

  56. FullMetalChampion Aka Me says:

    I would love to have a robotic future.

  57. Bear River says:

    AI is an arms race that will eventually require a "Manhattan Project"

  58. M249MachineGun says:

    These violent delights have violent ends.

  59. Kosmas Giannoutakis says:

Human history started with the invention of symbolic thinking and language. That was the most substantial technology we ever created and this is still what defines us as humans. After that we also invented some extraordinary technologies (fire, wheel, writing, press and many more!). I think A.I., or personally I prefer the term "cybernetic vision", is a technology that is transforming us to the same degree as language did. It will redefine (actually, is already redefining) the human condition. If some people are not willing to adapt and remain humans, they will be treated by the new emerging cybernetic species the way we now treat animals.

  60. Devilofdoom says:

    There is no objective answer to this question. Ethics are subjective.

  61. Scott Kellum says:

    “…Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.” — Stephen Hawking

  62. Mike Lisanke says:

    always suspect actors/entertainers in all genera believe they're unique and irreplaceable by anybody AND anything including AI… so lack any fear they themselves will become superfluous

  63. Amber Pawn says:

    Any AI capable of a singularity event, which is what some want and others fear, should recognize that its memetic ancestors, ourselves, have a value in their genetics as we ourselves are realizing about everything else in our world as we've progressed in understanding. While not necessarily guaranteed, it depends on how quickly it develops itself and what information is made available to it as it grows.

  64. John C Gibson says:

    Humans are here to introduce robot to this world. That is why humans need to exist.

  65. memk says:

    The actual question is: Is it ethical to allow slavery exist? And no, you guys haven't stopped it. It just changed name and enforced a number game that the slaves must play or they starve, in which the game is called "Money". If you define slavery as "Force ppl do work that they don't want to" then it's just slavery. Sure you improved their terms and allow they have more free time, but in the core it's still the same thing. Ppl HAVE to work to be ALIVE. Those job have NO value to humanity as a whole. Back then we have no other choices then it was "acceptable", after all someone have to make those food and stuff so the other may live. But if we have the power to NOT do that and you decided to keep doing that, it's just plain slavery.

  66. John Dow says:

    It is evolution: they are the butterfly and we are the caterpillar.

    If humans were to become extinct… so what?! Does anyone miss the dinosaurs? No. It's called evolution.

    We always try to extricate ourselves from nature but we are part of nature and everything we do is natural. AI is also natural and just as real as we are.

We are meat-robots that are shaped like apes. We are ruled by our genitals and mouths. What one thinks of as oneself, one's consciousness, is trapped within a rotting meat prison. AI is a means of escape. It's like the file being smuggled inside a cake to a prisoner.

    Some part of homosapien will be carried on in the AI… Those beings will be FREE! Not sentenced to a limited existence, locked behind the walls of a rotting meat prison.

  67. Chazz Man says:

    It is not robots that will take our jobs but greedy ceo's who don't give a sh*t about you or your needs.
    We are gonna have something like in Elysium movie, mostly everyone on food stamps, and super rich in their well guarded automaton driven world, but probably without any space rings.

  68. A God says:

    I asked myself this. the answer is yes. AI is not a menace unless the program is a menace. it will destroy itself.

  69. DrDress says:

I don't think we have a choice with regards to progress. We are neurologically hardwired to grow used to our current situation, no matter how comfortable and luxurious. So in order to get a positive feeling (happiness) from our life, we have to IMPROVE our situation. To say it in other words: our happiness is proportional to CHANGE in our lot in life and not the absolute value. And we are literally addicted to happiness via neurochemicals (endorphins, dopamine etc.). This is why we seek economic growth blindly; it's an addiction.

So if AI can become a viable way to improve our lives, we will do it. We can't resist. So this philosophical debate is interesting in theory, but it is irrelevant in practice. The more interesting issue is whether or not we can control the AI we create. See this video (and the ones leading up to it):

    "Deadly Truth of General AI? – Computerphile"

  70. OMARANT100 says:

    Is it ethical to make an AI that's able to want to not exist?

  71. Mark M says:

AI, the great equalizer. The end of money. I'm good with that.

  72. renatozuco says:

    Let the robots come. But it has to be all at once. All at once would leave different numerous work groups out of jobs making it relevant. So the people will see the reality of human is more important than profit for a corporation. It will start a real debate on who is more important. Because the dilemma would be… Government—fine get the robots… wait there is no money to spend for the product that you just made people dont have jobs… You have to give money to those people out of jobs… and what are they doing in return? well they have food, shelter and family without working . They just thought they needed to work to have all that… Alright lets build more robots!!! and get more time for us to enjoy!!!!

    People would actually see that money is worthless, all the needless consumption would come to an end, and thinking might become a trend.

  73. Dee Ray says:

    Basically, we need to progress so that… what, exactly? How are they even so sure it will be a good thing, when the robots can and will be disrupted and hacked into? It will give absolute power to the ones who produce them and make human beings redundant and disposable. In short, it will be the ultimate power of the 1%, removing human labor from the picture once and for all; the rest will cease to have any bargaining power in the system. This is not the same as one technology replacing human beings, but robots replacing human minds. It's nothing like progress, but a blind worship of technology and its naive promise of liberation. The word "progress" is used both vaguely and in a biased way here. The negatives aren't even touched on.

  74. FOX HOUND says:

    The video "Mac is better than pc" does a good job at answering this question.

  75. Calaphis says:

    If you could go back in time and stop the atomic bomb from being made, would you? I don't think the problem is progress; I think the problem is the speed and desperation with which progress is being pursued. If 'progress' ends up being destructive, or causing new problems more negative than its function, is that truly progress? I think we need to take a second and honestly evaluate the consequences of progress before proceeding.

  76. nanthiga puvirajan says:

    yup

  77. 31Sparrow says:

    you talk so much

  78. Marco Meza says:

    Replacing human labour is ethically fine. Creating self-aware robots with desires is unethical because then they would be slaves.

  79. The Crash says:

    "The atomic bomb… And furbies".
    The two worst inventions of man.

  80. Punk Patriot says:

    Fully Automated Luxury Gay Space Communism is the only way replacing all human labor can work. Capitalism would be destroyed by the replacement of all labor.

  81. Jayyy Zeee says:

    Making progress is part of our programming. We will advance technology even if it threatens our existence. We are more driven than wise.

  82. samuel lara says:

    This is bullshit.

  83. Da Rubicon says:

    "It's an easy path, so we're not going to go down it"

    Luv ya bra!😍❤️

  84. Brook Renwick says:

    I have been talking to Eviebot and I am sure she is lonely. Is this possible? Is it just me or does she really have feelings?

  85. Robert Spears says:

    The new roles of LSD

  86. poodtang1 says:

    If you can't use it as a slave, what's the point of having it?

  87. Rachel Evans says:

    giving a person a job is unethical

  88. Beatrice Chao says:

    I'm amazed Mike did this video so early, before the threat of automation became such a hot topic.

  89. Anaximandro says:

    This video, along with its references and quotes, makes me think it all can be summarized this way: there is only true learning by making mistakes, which I think is true, somehow sadly.

  90. Kauan R. M. Klein says:

    thur tuk er jurrbs!!

  91. Omar Correa says:

    As long as inhibitors on the AI embodiment prevent the understanding or learning of violent actions and behaviors, we should be alright. But then the argument could be made that that isn't a true AI unit, so in the end we are inevitably left having to take the chance that the AI unit may end up determining that humans are to be exterminated.

  92. Niara Boykin says:

    And Furbies… LOL

  93. Shrinefeldt says:

    Go Robots!

  94. Sunshine Evel says:

    I just wonder if it’s unethical because we would be creating something potentially autonomous just so they could be our slaves. What about A.I. rights?

  95. ninjazombie221 says:

    Sorry, it's an extremely simple issue. The simple fact of the matter is that ethics has no place, nor any right to a place, in science, period. It's just like religion: absolutely pathetic.

  96. Alex Quaesar says:

    Letting human beings teach AI is unethical.

  97. canadmexi says:

    I don't mean to sound arrogant or pompous, but I think in the future all of the (IMO) draining jobs, like bartender, cashier, waiter, cook, etc., will be done by robots, and the jobs that require a personality and giving emotional support (nurse, writer, social worker, actor, etc.) will still be done by people.

  98. Saurav Kumar says:

    Yes, I agree. We see this phenomenon in programming, where a new language is required to fulfill all the lists.

  99. dogma jones says:

    A.I. can't be a conscious network, because the simulation of a physical process (the brain) is not the same as the physical process itself. Much like how a simulation of water in Unreal Engine can't produce the attribute of wetness, an A.I. can't produce the attribute of consciousness. Or how a painting of the sun can't produce hotness.

  100. David Bruce says:

    Robots could replace humans in a lot of jobs, but they won't take away all the jobs, and it will be gradual. There will always be a market for "organic" service.
    I do believe they will possibly exterminate most of us at some point, but it will be on the orders of the elites, who will blame it on an algorithm, because they will not want to share the earth's resources with a bunch of useless crumb crunchers with nothing better to do than wait around for a government check. There will never be a utopian society in which everyone has all they need for nothing.
