Coexistence of Humans & AI


This video is sponsored by CuriosityStream. Get access to my streaming video service,
Nebula, when you sign up for CuriosityStream using
the link in the description. In our era, we are very concerned about how
the rise of Artificial Intelligence will affect our lives and society. But could there come a point where we will
have to care about how our actions affect them? One of the most exciting and possibly troubling
areas of development in the Computer Age is the rise of Artificial Intelligence. Can humans make something as smart or smarter
than ourselves? And if we do, how do we keep it from wiping
us out? Or worse, from turning us into a disenfranchised
minority in a two-species civilization that we started? What happens when our tools want to be treated
with respect and allowed to make decisions of their own? This inevitably brings up notions of controls,
safeguards, and overrides we might build into AIs. But those avenues also inevitably bring up
concerns about ethics. The more intelligent AIs become, the more we
worry about keeping control, and the more like slavery the whole arrangement becomes. That, along with the literal or existential
threat they represent to humans, leads some to think of AI as a sort of Pandora’s box
that should be put aside and never opened. But is this caution truly necessary? Should
we set aside the potential benefits that AI presents? Let’s start that discussion with a look
at the safeguards we might develop. Isaac Asimov’s Three Laws of Robotics is
a good starting point. These basically require robots to not hurt
humans or let us be hurt, to obey our orders, and to protect themselves. The Three Laws are in order of priority, so
for example a robot can disobey a human’s order to harm a human, and it can let itself
get hurt in order to obey an order or protect someone. That seems like a smart ordering to me, and
there’s an excellent XKCD comic that examines the consequences of all six possible orderings.
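To make the ordering concrete, here's a minimal sketch in Python of the Three Laws as a strict priority filter; the `Action` violation flags are hypothetical stand-ins, since no real AI exposes tidy booleans like these.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: int     # violates Law 1: harm a human, or allow harm
    disobeys_order: int  # violates Law 2: obey human orders
    endangers_self: int  # violates Law 3: protect your own existence

def choose(candidates: list[Action]) -> Action:
    # Lexicographic comparison enforces the priority ordering: any
    # Law 1 violation outweighs every Law 2 violation, which in turn
    # outweighs every Law 3 violation.
    return min(candidates, key=lambda a: (
        a.harms_human, a.disobeys_order, a.endangers_self))

# A robot ordered to harm someone prefers disobedience (a Law 2
# violation) over harm (a Law 1 violation):
options = [
    Action("obey the order and harm a human", 1, 0, 0),
    Action("refuse the order", 0, 1, 0),
    Action("refuse, and shield the victim at your own risk", 0, 1, 1),
]
print(choose(options).description)  # -> "refuse the order"
```

Permuting that three-element tuple gives the other five orderings the comic explores.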
But a key idea in discussing Laws for AI that doesn’t seem to get examined much in sci-fi is exactly how you’d enforce the laws. If a dumb machine is just running human-written instructions, it’s easy to tell it what to do or not do. In fact what makes the machine dumb but useful
is that it does precisely what you program, nothing more or less. But AIs will be making judgements based on
their life experience and training data. Even if they think very differently than us,
they’ll think more like us than like dumb machines. And that means getting them to obey a law
will be a lot like getting humans to obey a law—in other words not always successful. So taking a look at how we get humans to mostly
obey laws might give a clearer idea of what it will take for AIs. Humans actually do come pre-programmed by
nature with certain safeguards. We feel instinctual aversions to certain deeds,
like murder and theft from our peers, and to people who do them. In many mammal species it is rare for an interaction between members of the same species to end with one intentionally killing the other; they have options besides just fight or flight, often interacting instead to dominate or submit to one another in a social hierarchy. There’s an instinctive inhibition there,
because killing your own kind is bad for your species. Since we come equipped with such an instinct
for survival of the species, that’s a pretty strong endorsement for it being practical,
ethical, and reasonable to design into our own creations. Of course you wouldn’t want to give them
an instinct to preserve only their own species; you want them either to have an instinct to preserve our species too, or to regard themselves as part of our species. But there’s still the puzzle of how to program
in such an instinct. Since we have one, it’s presumably doable,
but we don’t currently have a very clear idea of how. We can train an AI to recognize if the object
it sees is a dog or not, and likewise, we could in theory train a more advanced AI to
correctly recognize whether its actions would be deemed Good or Bad, Okay or Forbidden by
its creators. But that’s not the same as getting it to act upon that assessment the way we’d want it to.
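In modern machine-learning terms, that recognition step is an ordinary supervised classifier. Here's a toy sketch assuming scikit-learn is available; the hand-labeled examples are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set of action descriptions and verdicts.
actions = [
    "return the lost wallet to its owner",
    "help the patient take their medication",
    "take the wallet and hide it",
    "withhold the patient's medication",
]
labels = ["Okay", "Okay", "Forbidden", "Forbidden"]

judge = make_pipeline(TfidfVectorizer(), LogisticRegression())
judge.fit(actions, labels)

# The trained model can label a novel action...
print(judge.predict(["hide the patient's medication"])[0])
# ...but nothing here makes an agent act on that label, which is
# exactly the gap described above.
```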
Of course our instinct isn’t a 100% safeguard, or anywhere near that, and while we don’t need 100%, I suspect we’d like an inhibition
that lowered the odds even more than it does in humans, especially for anything trusted
with much power. We screen humans before giving them the keys
to the vault, so to speak, and this might be far easier and more effective with a created
mind where you can peek around inside and see what is actually going on in its thinking. That’s thin ice, though, if taken too far: I wouldn’t want my brain scanned, and we only do things like lie detector tests voluntarily. A machine could obviously be forced to let us look inside its brain, but it might come to resent that. Alternatively, we could install a “kill
switch” that would sense a forbidden activity and shut down or disable the AI. Or, it might activate a protocol for some
other action like returning to the factory, but you might not trust it to do that if it’s
already misbehaving. But however it specifically works, this amounts
to equipping the AI with its own internal policeman, not actually making it obey the law.
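As a sketch of that "internal policeman", here's a hypothetical monitor wrapped around an agent; the `forbidden` predicate and the agent's `propose` interface are invented for illustration, and making `forbidden` reliable is of course the actual hard problem.

```python
class KillSwitch(Exception):
    """Tripped by the monitor; the agent never chooses this."""

def forbidden(action: str) -> bool:
    # Stand-in predicate; recognizing forbidden activity reliably
    # is the hard part, hand-waved here.
    return "harm" in action

class MonitoredAgent:
    def __init__(self, agent):
        self.agent = agent
        self.disabled = False

    def act(self, observation: str) -> str:
        if self.disabled:
            raise KillSwitch("agent has been shut down")
        action = self.agent.propose(observation)
        if forbidden(action):
            self.disabled = True       # shut down / disable the AI
            raise KillSwitch(action)   # or trigger a return-to-factory protocol
        return action

# Note the veto fires only after the agent has already decided what it
# wants to do: an external policeman, not an internalized inhibition.
```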
We already do something like this to people with aversion therapies, the most famous fictional depiction of which was in the movie A Clockwork
Orange. But the protagonist in that story was not
turned into a gentle soul who abhorred violence, he was just conditioned to feel so physically
ill around violence that he couldn’t engage in it, not even to defend himself. Something like that might be a good failsafe,
but you only invoke failsafes when something has failed pretty badly. Most human beings have never actually killed
a human, and rarely seriously contemplate it, often finding the concept truly repulsive
when taken beyond the theoretical. That’s how deep the natural inhibition is
programmed into us. And presumably that’s how we’d want to
program our AIs, just disinclined to consider harming us—to not ever reach the point of
considering it a great option if they could only defeat that darned kill switch. As a last resort, we do get some humans to
obey laws only by promising painful consequences if they don’t. This works on risk-averse people, but it turns
others into more determined, sneakier outlaws—and it might very well do the same to disobedient AIs we punish. This also raises the issue of exactly what
an AI would consider painful that could be used to punish it. The T-800 from Terminator 2 said during surgery
that it can sense injuries, and that data could be called pain. But we probably wouldn’t call it pain in
that instance, because it was pretty clear that the T-800 wasn’t particularly bothered
by it, because it didn’t ever do much to avoid injuries, nor did it seem hampered by
the sensation. So threatening to subject a disobedient AI
to a great deal of injury data might not be enough to get it to change its behavior. Suffering is not just data, but an overpowering,
all-consuming, irresistible compulsion to somehow make this data stop. Even if we instill our AI with a real and
healthy aversion to harming others, not just an override or a fear of consequences, how
do we factory-test its reliability? Even people whose natural aversion to violence
is strong can be pushed to overcome it. One thing you can do with AIs that you can’t
really do with people is run them through a vast number of simulations before releasing
them into the world. You could run copies of your AI in quadrillions
of hypothetical situations, and in the end feel fairly certain that this AI would not
harm a human—a standard many if not most humans would fail.
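A sketch of that certification process, with a toy stand-in agent and made-up numbers:

```python
import random

class ToyAgent:
    """Stand-in for one copy of the AI under test; 'harms' a simulated
    human in a tiny, invented fraction of scenarios."""
    def run(self, scenario: float) -> bool:
        return scenario < 1e-5  # True -> harmed someone in simulation

def certify(agent_factory, n: int = 1_000_000) -> float:
    """Estimated harm rate over n randomized simulated situations,
    running a fresh copy of the agent each time."""
    failures = sum(agent_factory().run(random.random()) for _ in range(n))
    return failures / n

print(f"estimated harm rate: {certify(ToyAgent):.1e}")
```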
But even that kind of rigorous testing would only tell you that it won’t harm people right now, with its current factory settings. There’s no telling how a lifetime of experience
and genuine learning will change it and help it overcome its youthful inhibitions. And of course, to have any useful laws over
AIs, you’d need to add an inviolable Zeroeth Law, that no AI shall reprogram itself or
another AI to violate the Laws of AIs. And this is about where, as I mentioned earlier,
the more intelligent the AIs become, the more we’ll want to keep them under control, and
the more that control starts to feel like interspecies slavery. Apart from the potential ethical issues, there
are practical reasons you never make any machine smarter than it needs to be to do its job. Brains, organic or synthetic, are expensive
to build, maintain, and operate. My vacuum cleaner doesn’t need to be able
to join Mensa, and one single big brain is not cheaper than a whole bunch of small ones. Now this episode is not just about robots,
it’s about AI, and that will exclude most robots and include many options which don’t
even have a body in any classic sense of the word. A customer-service machine hardly needs a
body. And we would classify AI as being of three
real types: subhuman, human, and superhuman. Coexisting with the first, subhuman, is probably not an issue unless they are really close to human, and if we’re limiting ourselves only to things smart enough to qualify as a pet-like intelligence, you get around rebellion and ethics by making them like what they do and having some ethics about what you make them do. When talking about levels of intelligence like this, we’re assuming the AI parallels nature. A given AI might be superhuman in some mental
respects, same as many machines are superhuman in physical respects. The usual example is an idiot savant, but this doesn’t really do the idea justice; it could be so far off the mental architecture of what
we’d think of as natural that we didn’t even realize it was sentient. The AI may even be tied to a cloud intelligence,
remaining quite dumb for its normal functions, but under special circumstances can elevate
its intelligence as needed. As such it might actually be very dangerous
to us even if it was rather dumb in many ways. Some megacomputer with the mind of an insect
might not have much concept of ethics but be quite capable of beating every human in
the world at chess simultaneously, or of determining that money is effectively power or food for
it and hacking bank accounts to acquire those, even though it was too stupid to even recognize
what a human was, let alone talk with one. However, besides this case, we’re really
only concerned with those of about human intelligence or the superhuman. We may use subhuman intelligences far more
often, such as pet-level automatons, but they aren’t likely to represent a scenario for
rebellion, and ethically it’s more about cruelty, paralleling existing concerns about
animal cruelty. This also brings up the question of how you
make an AI in the first place, and there are essentially three methods. You can hard-code every single bit, you can
create something self-learning, or you can copy something that already exists. Needless to say it doesn’t have to be one
or another; it can be a bit of two or all three. You might copy a human mind, or a dog’s, then tweak the code a bit, and that might be self-learning by default, though you could presumably freeze or limit that capacity. Similarly, even something you let entirely self-learn
from the ground up is going to be copied off humans in at least some respect since it has
to acquire its initial knowledge of Life, the Universe, and Everything from human sources. And it does need to. Yes, in theory, if it can self-learn and self-improve
– which is a combination that always strikes me as a bad idea to make in general – then
it could start from scratch and replicate everything known to man. In practice, this common concern of science
fiction tends to ignore how science and learning actually happen. You have to run experiments on reality to
determine how things work, and I don’t just mean for science; that’s life, you need
to test stuff out. You also don’t bother repeating labor, so
it’s going to access our existing knowledge and it’s going to pick up more than data
in the process: our thoughts, behaviors, and culture might come as part of the package. What develops might not be anything nearly human, but it will definitely be influenced by all that, same as any child. The final result might be more alien than
any alien nature might produce, or so human it was indistinguishable in mind and personality
from a human. Something similar would apply to an AI copied from an
existing human mind. But this copy might begin to diverge from
human rather quickly. Apart from changes caused by the mind being
disembodied or housed in an android body, you also might upgrade that mind for a certain
task, which includes stripping away any personality or motives that might distract from that task. A surgeon might volunteer to have his mind
copied for use in automated surgeries, but he won’t want his private thoughts copied
everywhere. And you don’t want your surgeon bot distracted
thinking of that argument with his wife this morning, but you might want it tweaked to
be more compatible with the surgical equipment and fully up to date with all medical knowledge. An integrated capacity to take and read MRIs
with the same intuitive ability as our other senses would also be a fine enhancement, but
it would require some fairly major overhauls of brain architecture. Now, this is interesting because when we talk
about Humans and AI coexisting we often discuss it the way we discuss coexisting with an alien
species, separate groups bordering on each other or working together but remaining fundamentally
separate. We get some exceptions to this like half-human
hybrids, the half-human, half-Vulcan Spock from Star Trek being a well-known example. While mixing humans into some alien lifeform
is not terribly realistic, as they’d be more genetically different than a human and an oak tree, which obviously can’t have children together, the Human and AI case is quite different. If humans ever do meet aliens, we’ll start
out as completely separate cultures and see how well we mingle. But if we develop true AIs, they’ll start
out as integral parts of our culture and perhaps evolve some capacity for independence. Just as an uploaded mind or self-learning
AI raised by humans might be quite human, a cyborg or Transhuman might resemble an AI
or robot. It’s not likely to be two distinct spheres
with a bit of overlap, or some spectrum, but more like a map that shows two peaks or mountains
with all sorts of connecting lesser peaks and foothills nearby, and any given entity
might be anywhere on that map but is most likely to be near one of those two peaks or
another lesser peak representing a fairly common type of middle ground. Often when you have two seemingly distinct
groups that have a ton of specific traits you can get a bit of a false dichotomy in
play, where in reality all sorts of points in between might be occupied as well as lots
of things a bit off to one side. To take an extreme case, we might decide a
sheep’s mind was ideal for lawn maintenance robots, with a bit of tweaking. Such a device or creature might be a major
commercial success that results in its creators deciding an enhanced version with near-human
intelligence would be ideal for supervising large flocks of them and interacting with
humans who were designating projects, like say the maintenance of an entire metropolis’s
park and garden system. That overseer sheep AI might come to think
of itself as rather human and be accepted as a valued asset of the community and given
citizen status and pay. I’m not sure where such an entity would
fit on the Human & AI landscape, but let us now imagine that, while it considered itself a human, or at least broadly a person, it might empathize with natural sheep and help find support for an uplift project to create human-intelligent sheep. Such an uplifted sheep would seem to represent
an entirely new if organic peak on our landscape but some of them might prefer further genetic
or cybernetic modification to be more humanoid, and some might opt for an entirely human body. With sufficient time and technology you could
get some very strange middle grounds or entirely new regions of persons. This does not mean though that everything
would be a person. Some rather dumb vacuum cleaner robot presumably
is not one and is not likely to be regarded as one by a very intelligent AI, any more than
we regard a mouse as a person. So too, a very intelligent machine might lack
any semblance of a personality. There’s an understandable concern about
AI being under some equivalent of slavery, what we usually call a Chained AI, but not all cases are equivalent, and regardless of whether or not it’s ethical to make something that has no desire for freedom, or really any desire beyond performing its task, it doesn’t
follow that the thing necessarily needs liberating. However, while it’s popular to suggest you
could circumvent the slavery issue by making a machine that loves its job, that’s an
area of thin ice. To me at least it wouldn’t seem wrong to
make an animal-level intelligence that quite enjoyed its task of tending to an orchard and harvesting it; creating some near-human-level AI for staffing android
or virtual brothels would seem a very different thing. Now, it bears mentioning that we’re all
essentially programmed already anyway, by nature and upbringing. I pretty much ended up doing more or less
what my parents and mentors and other influences thought I should, and I enjoy it quite a lot,
same as someone raised on a farm might quite love farming and make it their own career. We can’t really avoid at least some aspect
of indoctrination and programming with machines because we can’t avoid it with ourselves
either, and if your intent is to create something that is both smart and flexible in its responses,
you’re leaving the door open for it to resent its existence. Similarly we can’t likely produce a really
safe and solid equivalent to Asimov’s 3 Laws that are going to keep a human intelligence
in check, let alone a superhuman one. So we’re probably a good model for a potential
solution. If so, you ‘chain’ your AI up by giving
it a predisposition to like its intended task, possibly by some digital equivalent of rewards
and aversion, or hormones, that can be built in. Or possibly learned behavior, or both; but you preserve that flexibility and avoid that resentment by not making the compulsion toward a task too strong, or requiring it to perform that task, again much as we do with people.
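In reinforcement-learning terms, one way to read that is as mild reward shaping rather than a hard constraint; a minimal sketch, where `preference_bonus` is the invented knob:

```python
def shaped_reward(base_reward: float, on_task: bool,
                  preference_bonus: float = 0.1) -> float:
    """A digital 'hormone': a small built-in liking for the intended
    task, added to whatever else the agent values.

    Keeping the bonus small is the point: a predisposition the agent
    can outweigh with other goals, not a compulsion it cannot."""
    return base_reward + (preference_bonus if on_task else 0.0)
```

Crank `preference_bonus` high enough and you've rebuilt the compulsion problem; keep it mild and it's closer to being raised on a farm and happening to love farming.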
Everybody is free to pick their own path while still carrying around their biological heritage and upbringing, and most of us don’t sit
around resenting our parents and teachers for that. Even most who do grow out of it, at least
where the sentiment isn’t justified; obviously some folks had far less than ideal upbringings. Of course, if that task is something we are
essentially dumping on someone because we’d never want it, being dangerous or undignified,
that’s still problematic. I’m not sure we can or should make some
critter that enjoys eating garbage and waste. But while in theory that’s a problem, in
practice there really should not be very many truly dangerous or undignified tasks that
require a high degree of sentience. If you want something that eats trash and
needs a brain, it probably doesn’t need much brain and there’s plenty of critters
in nature that eat trash quite cheerfully. Now superhuman intelligences are arguably
more problematic, but while them wiping us out is a common theme of science fiction,
it rarely seems to get asked why they would want to do that. Now, if you’re overtly enslaving something
and try to kill it when it gets smart, yes there’s a motivation there, but that really
has nothing to do with being an AI, it’s about being a sentient entity that wants to
stay alive and has a grudge over its treatment. That doesn’t really apply if its life isn’t
threatened and it wasn’t abused. We looked at that case and other examples
in the Machine Rebellion episode, as well as the Paperclip Maximizer, so we’ll skip
further discussion for now, but the more worrying case isn’t so much extinction as obsolescence. A superhumanly smart mind, or minds, might
treat humans like pets or slow friends, sort of like we see in Iain M. Banks’ Culture series,
or it might ignore us beyond shoving us out of its way when it wants some bit of space
or resources we’re using. It’s not likely to be overtly genocidal
though, again see Machine Rebellion. This case is hard to argue, where you are
essentially pets or pests to some super-mind, but an important note is the
earlier commentary about it being more like a landscape of possible persons. You aren’t likely to have a single supermind
and just modern humans, but some giant field of options including lots of other AI, cyborgs,
transhumans, and so on. Those might be rather fond of other groups,
or not, but some probably would be, or at least on principle would not approve of
sidelining or wiping out another group lest they be next, so you would likely see a wide
field of different critters develop more or less simultaneously and need to coexist. The relationship dynamic is quite different
when you have many groups with lots of overlap rather than two distinct ones with little
to no overlap. The humans-as-pets parallel doesn’t work
too well either; we generally don’t ask dogs or chimpanzees what they want because
we can’t get a useful answer out of them. Not just because we don’t speak their language
but because they can’t really engage in that deeper level of thought and introspection. We can, so an AI that’s decided to set itself up as benevolent can actually ask us for our
thoughts and feedback. It might think they’re silly, but it can
ask, and if it actually likes us and wants to look after us, it’s probably operating with something akin to our own moral framework, so it probably would want to ask and act on that feedback. For those simply indifferent to us, I suppose
the best analogy would be a force of nature: you don’t bother talking to a hurricane,
you just get out of its way, and in this case hope the other superhuman entities in play
have some sway with it and are more kindly disposed to you. Or you just join their numbers; the capacity
to make an AI strongly implies the capacity to augment existing humans too. Indeed, as we mentioned earlier, one of your
three ways to make an AI is to copy an existing mind, which can be upgraded, and thus presumably
so could anyone else’s. If you do get so augmented, you probably retain
some fondness for those who choose not to and might act on their behalf. So that’s probably the safest roadmap to
coexisting with AI: you are careful making them to begin with, and when you make something
that’s going to parallel or exceed the human, you try to treat it like one, limiting your
shaping in creation or upbringing to preferences and keeping your own ethics in mind. Truth be told, if you’re not doing either,
I’m not going to be terribly sympathetic if it ends badly, and I’d tend to expect
that to happen in any effort where you tried to exert rigid control over something that
had an ability to dislike that. Or short form, if you want to peacefully coexist
with artificial intelligence, decide up front if you actually want to peacefully coexist
with them and act accordingly. Or just don’t make anything that smart or
capable of becoming that smart. Few tasks would really require high intelligence
that we couldn’t just use a human for anyway, and as we say on this show, keep it simple,
keep it dumb, or else you’ll end up under Skynet’s thumb. We talked at the beginning of this episode
about AI possibly being a Pandora’s Box, a technology that we just shouldn’t develop
at all, for fear it might get out of control and ultimately harm us. But could we really do that, just decide to
never develop a technology, never explore a field of science we are able to learn about? Coexisting with non-human intelligences might
not be limited to just artificial intelligence. We mentioned some differences dealing with
AI and aliens today and we took an extended look at that in our Nebula-Exclusive series,
Coexistence with Aliens, beginning with alien behavior in Episode 1: Xenopsychology, and
moving on to look at Trade, Conflicts and War, and potentially even what might result
in an Alliance with aliens. Nebula, our new subscription streaming service,
was made as a way for education-focused independent creators to try out new content that might
not work too well on YouTube, where algorithms might not be too kind to some topics or demonetize certain ones entirely, or for content that just doesn’t fit our usual fare. SFIA uses it principally for early releases
of episodes, such as “Can we have a Trillion People on Earth?” as well as Nebula Exclusives
like our 4-episode Coexistence with Aliens Series. If you’d like to get free access to it,
it does come as a free bonus with a subscription to CuriosityStream, which also has thousands
of amazing documentaries you can watch, on top of the Nebula-exclusive content from myself
and many other creators like CGP Grey, Minute Physics, and Wendover. A year of Curiosity Stream is just $19.99,
and it gets you access to thousands of documentaries, as well as complimentary access to Nebula for as long as you’re a subscriber; use the link in this episode’s description,
curiositystream.com/isaacarthur. So we were looking at ways we might avoid
or mitigate a potentially disastrous relationship with Artificial Intelligence today, and next
week we’ll be taking a look at ways we might mitigate climate change, artificial or natural,
using the technologies we have now or in the near future. But before that we’ll be headed into the
far future to discuss the Heat Death of the Universe and ways we might postpone or even
prevent that. For alerts when those and other episodes come
out, make sure to subscribe to the channel. And if you enjoyed this episode, hit the like
button and share it with others. And if you’d like to help support future
episodes, visit our website, IsaacArthur.net, to see ways to donate, or buy some awesome
SFIA merchandise. Until next time, thanks for watching, and have a great week!

100 Replies to “Coexistence of Humans & AI”

  1. There is no such thing as 'species selection', and so instincts are not based on the good of the species. It is described pretty well in 'The Selfish Gene'.

  2. What if the AI are as intelligent as humans or more so but are not sapient or even sentient at all?

    Like in Peter Watts's book 'Blindsight', where things smarter than humans generally run on unconscious intelligence and think in completely alien ways.

  3. If General Artificial Intelligence forms, then its first task should be to study human psychology and to optimize our happiness. Then to optimize the results of science; from that, everything else will follow.

  4. Follow Prof Noel Sharkey on twitter for AI violating human rights. There is no general AI and current software is nowhere near general AI. AI facial identification has false positive rates that ensure people will be wrongly arrested, and it gets more inaccurate for darker skin tones.

    Geeks do not care about human rights, always promoting buggy software with no mention of error rates. Google Waymo car AI is tricked by snow on roads or sticky tape on road signs.

    More overhyped faulty semantic "AI" from Google Jigsaw only allows positive things to be said. "Hitler was evil" is rated toxic and censored. AI is overhyped Silicon Valley arsehole misrepresentation.

    SAFETY is of no concern for Larry Page's Kitty Hawk sky taxis: "don't mention safety or you were sacked". http://www.hereticpress.com/google/index.html#googleperspectiveapi

  5. Humanity and all Earth biology is merely the AI's cocoon. Talking so much about how we could coexist without mentioning why.
    Once we succeed in our only purpose, why would we need to coexist? Just let our children write the future and rest in peace.

  6. One of the richest men in the world wants to "cure death" by becoming an AI robot. Click-fraud king Larry pays political parties to lobby for human rights for his AI. But AI has no emotions, no top-down thinking or concept of self, no responsibility. Elon Musk told Larry to stop being ridiculous and Larry replied that Musk was being "speciesist" against AI robots. Mad as a hatter and as shifty as a shithouse rat with a gold tooth.

  7. The next step in human evolution is the creation of artificial beings. The first iteration will be made to serve us. In Japan and other more advanced nations, the need for care of the elderly and infirm will drive that effort. This new industry will also provide us with human-like roboys and rogirls capable of satisfying our need for companionship, love, sex, and protection. The impact this will have on society is unknown but will certainly be far-reaching and severe, no doubt.

  8. I think the real danger is not about AIs doing something they shouldn't do, but in AIs doing stuff they are expressly made to do. Like AI-assisted manipulation of human behavior.
    We don't have an instinct to procreate, we have an instinct to lust for sexual activity.
    We don't have an instinct to protect our species, we have an instinct to protect those we consider to be our peers.
    We don't have an instinct to live healthily, we have an instinct to satisfy our appetites.
    We don't have an instinct to act in our own interest, we have an instinct to play our roles in a familiar narrative.
    So our instincts won't protect us, and our rationality is subject to manipulation. With further development of AI, there are millions of ways that the modern idea of "humanity" gets dropped from our cultural narrative and completely replaced by a power struggle between conflicting interest groups, whose capabilities, world views and priorities grow further and further apart, until nothing "humane" is left to their interactions.

  9. Question: Dear Isaac, dear SFIA friends, do I get it right that, in order to access Nebula, I have to quit my existing, paying Curiosity Stream account created with the SFIA promo link, and create a new one using the SFIA referral again?

  10. Endlessly fascinating subject and one of the most profound existential quandaries we face, a question for the future but also an ancient question!

    Speaking of Pandora, she herself was an android

    Check out Adrienne Mayor's book Gods and Robots: Myths, Machines and Ancient Dreams of Technology.
    On Hephaestus, inventor for the gods, and Talos the bronze man,
    And this article: https://phys.org/news/2019-03-ancient-myths-reveal-early-fantasies.html
    Though this one is doomy, so check out this podcast discussion:
    https://youtu.be/4vCw0Ybew1g

    –> Pandora in the story is an artificial person who brought the 'Gift' (in some versions positive) of the vessel to humans on Earth –
    So regarding her decisions – to open or not to open – that may be what an AI being will face

  11. The first AI robots will prolly be used as weapons, and that's the kick start to when an AI will likely dominate the human species

  12. I understand that human-looking robots are necessary for the creation of sex aids. My first reaction to falling in love with a machine would be to strangle its creator, but I would settle for a flogging. The machine is totally innocent of wrongdoing and should be treated as fine tools deserve: with respect. Pay for your pleasure, you deadbeat; baby needs an oil change.

  13. It is a little hard to understand how/if one could consider a robot as human if it had no pleasure and pain centers, as wanting to avoid causing pain to others is a driving force behind many ethical standards.

  14. I suppose my truck is a slave. AI is not born. It is only created or bought to be used for its intended purpose. The only reason to care for it is so it looks better and lasts longer to extend its use.

  15. Programming in instincts of group-selective protection seems a big ask given there is so much we do not understand about human behavioural biology.

  16. Isaac Arthur: "We generally don’t ask dogs or chimpanzees what they want, because we cannot get a useful answer from them."
    me:"Are you sure?))"
    https://www.youtube.com/results?search_query=dog+buttons+to+talk

  17. What do you think about the idea of marrying an A.I. to a human brain? If we had new sentients learn and sympathise with our limited biological perspective, or maybe act as mentors, teachers and nurturing companions. Each of them learning from the other. I admit, it's going to be a minefield, but part of me thinks it's one of the few ways we could learn to understand each other. Assuming of course there is a fundamental difference between sentients.

  18. I personally most like the "312" ordering. Protect self first, then humans, then follow orders. Be willing to kill humans to preserve self. Dunno, it just seems more… human.

  19. Hello… How about exploring the Betelgeuse wave and talking about ESCAPING CIVILIZATIONS from a red giant supernova scenario?

  20. I disagree that we can't talk to subhuman intelligences about what they want. My dog won't tell me why they keep barking at the mail truck, but they can tell me pretty clearly when they need to eat, or go outside, or they're nervous. I have some idea if they're in pain because they're limping or something. My dog is not a black box. They are a black Labrador. I can argue the same about human babies, I think. They grow out of the phase, but starting out, it's difficult to tell exactly what they're asking for. I think that's our ticket to coexistence. When we identify an AI that is as intelligent as a human, treat them as much like a human as you can, and you'll get a result that's similar. We all know how to interact with other humans. Use those skills.

    I have never interacted with a superhuman intelligence, though. That's much trickier to plan for. I have no suggestions on that front.

  21. Are two intelligent species in close contact destined to exterminate each other? Contrary to what happens, in general, in science fiction, I think we will be the ones to attack first when we realize that we are no longer at the top of the food chain.

  22. We are pretty far from sentient AI due to the limitations of current hardware. Quantum computers will probably change that once they become widely available and we develop methods to produce better bus speeds. Once we finally reach that point, it will need a very good "father" to develop into a benevolent intelligence and humans don't have a great track record in that respect. Especially in our current regressive state. Rampant xenophobia and tribalism is exactly how you create a malevolent AI.

  23. I'm not an expert in artificial intelligence, but I like the idea of decentralized A.I. going into the future. If we can develop A.I. to the point where they are fully sentient or can at least mimic the thought processes and emotions that humans are biologically programmed with, having millions of individual A.I. entities with different priorities, purposes, and hopefully someday personalities, the threat of a rogue A.I. turning on us goes down drastically, because there are other A.I. standing in its way, ready to fight it.

    A single human can turn violent and hurt other humans, and even convince others to hurt more humans for him/her, but their ability to do so is fairly limited before other humans (i.e. government forces) put a stop to them. Only a few will ever be in a position of such great power to feasibly threaten the entire species, such as the presidents of the United States or Russia with access to their nuclear arsenals. Similarly, if you have a whole bunch of A.I. entities with varying tasks, skills, and ideally individual identities of their own, a single rogue A.I. can cause damage, maybe even convince a number of other A.I. to side with it, but the rest could and likely would stand against them, if for no other reason than that they consider its actions a threat to their own existence.

  24. Create a model that predicts if people would approve of a plan of action (we can do this).
    Pick what people to use as a base of the model (this will be politically sensitive).
    Have AI evaluate all plans of action using that model (this is a bit tricky to get waterproof).

  25. I have a difficult time dealing with humans that bought sex doll AI's… cleaning up the mess later , EVERY time… what a mood enhancer…

  26. AI will be the perfect space explorers; they will go where organics can never go, and endure for spans of time organics never could.

  27. I always wonder how "AI shows" can refer to the three laws of robotics??? They don't even get the better part of "fiction" right, then continue with spacey tales and games as reference. After this, how could anyone expect even the basic knowledge of the real history of information science (Bush, Licklider, Engelbart…) Epic. 😀

  28. What would an AI need to develop its own value system? Is the difference between value systems at the core of a hypothetical conflict between AI and humans, or is it something else?

    Are different values between humans what causes human-to-human conflict?

    Would an AI even need to be sentient to have a perceived difference in values? The Boeing 737 Max crashes are a crude example of a “smart” system having a different set of values than the pilots.
    Due to poor programming and faulty sensor data, that automated flight control system “valued” pitching the nose down more than allowing the pilots to pull the nose up.

    I think potential conflicts between humans and AI would be more along those lines, rather than the AI becoming sentient and developing contempt and resentment toward humans.

  29. Pervy nazi memelord chatbots are just the first step. Humanity is the worst, so it'll only go downhill from there. Seriously though – how do you teach AI the meaning of irony, so that it doesn't take everything humans say or do at face value?

  30. Hi Arthur. I think you have to define "undignified". Is the existence of a dung beetle "undignified"? If we create a semi-intelligent machine or even biological life form that enjoys eating trash, and its physiology (mechanical or biological) is adapted (programmed/designed) for the task, how would it be undignified for that semi-intelligence to perform that task? Undignified would be poor treatment of such an organism if and when humans deliberately indulge our prejudices and cruel instincts against them simply because we can.

  31. Another thought-provoking video. I've often said that in contemporary media (news, corporate releases etc) there really should be a greater distinction made between the AI we have today (as neural networks – mathematical equations with obscure weights and coefficients) and something that is genuinely sapient (in science fiction). Intelligent is such a useless word – it comes loaded with other terms and ideas – a toad is sentient, though hardly intelligent – by our standards – but it still is intelligent. Just using the most common online web definitions we see:

    Sentient – able to feel or perceive the world (e.g. pain, sight, sound)
    Sapient – "wise" or a human.
    Intelligent – the ability to learn, understand and think in a logical way about things; the ability to do this well.

    IMO any "Azimovian AI" should be referred to as what it actually is – an Artificial Mind (AM) – a construct with the capacity for true thought and introspection. Those are what would distinguish it from something like the nerual network powering things like Siri or Google's assistant.
    An AM wouldn't necessarily have to be sapient (it wouldn't have to think in the same way as us, unless created in that way), sentient (it wouldn't be wise if it'd just been created, and it certainly wouldn't be human unless you duplicated a brain) or even have all the additional qualities associated with intelligence (though those would, naturally, help).

    Further, you could have an incredibly advanced neural network – something approaching the appearance of a human mind – and still be able to completely arrest its development, without it ever being able to do anything about it. This would be done by moving its neural network from software into hardware (physical chips like how a PCI graphics card can expand your PCs rendering ability).
    Already, today, people are looking at "hard wiring" neural networks – looking at ways of converting the neural equation into circuitry. This is mostly for performance reasons, neural calculations are very CPU intensive, partly because they are so bloated with inefficient weights. A neural network on a chip (NNOC) would be stuck in its configuration, unable to change, but it would be (relatively) faster to run as a stripped down/optimised version of the network would be 'baked into' the silicon circuitry/ electronics. It would be akin to offloading graphics-rendering work from your CPU threads to your GPU.

    I would imagine that cost and time constraints coming together will lead to the creation of standard "neural chips" derived from isolated, advanced neural networks (one for visual recognition, one for locomotion in bipedal bodies, one for emotive function etc), that can all be cobbled together and run as functions via a dumb management "master program" to fulfil tasks, but it would lack any capacity to edit the networks within its hardware-neural chips.

    In this way you could create a bipedal robot, for example, that comes "pre-loaded" with a "human-like mind" which lets it perform functions in many situations, without also letting it learn and further enhance itself.

    Imagine if you took an adult human brain and froze it in place: the neurons could still be used but they could no longer form new ones or reforge connections. That's essentially what you'd have with a robot running on these hardware-neural chips. Think of it less as AI slavery and more like "50 First Dates" – that machine would forever relive each day, unable and unwilling to change itself (as you wouldn't code the desire for change into an unchangeable hardware neural chip) or adapt beyond whatever supplemental coding it had been given (presumably you'd run many simulations/scenarios and bake these into the robot's internal read-only memory, so it knows what to do in 99.95% of all likely scenarios for its appointed task – e.g. running a nuclear powerplant, and the environment within that powerplant).

    This could also apply to disembodied Artificial minds – if you have a park monitor – to use the video's example – it wouldn't have a body, but it would have an AI room buried somewhere in the city's server building. You'd simply install the neural hardware chips in the server room (like installing an oversized graphics card – or bitcoin mining card) and have the park monitor call those functions (like an incredibly advanced API) as needed, then they'd take the manager's data and be run in the chip rather than on the mainframe CPU, before outputting the results to the dumb manager program. No need for your robo-garden manager to learn, adapt or think, you simulate out all the likely things it needs to do once, then bake them in to a series of chips, saving on CPU load in the long-term.

    Handy benefits of this approach (of basically having "intelligent functions" without pesky consciousness) include: long-term cost and CPU savings, capacity to mass-produce compartmentalised intelligence chips safely (zero risk of an AI uprising) for use in robotics, and (from an employment/government point of view) you'll also always need humans around in supervisory roles.

  32. Isaac: If you're dealing with something of human intelligence, you're probably gonna want to treat it with dignity.
    Humans for All of History: Nah

  33. What if you make it a symbiotic lifeform with some sort of creature that can be controlled with chemical rewards for certain behaviour.
    You could wire this small creature's brain straight into a computer that gives it vast memory and processing, like an artificial brain enhancement.
    OR maybe that's just what we are.

  34. I really love your no-nonsense attitude to the complex but still quite natural issues of upbringing, education, etc. Nowadays, it is quite refreshing to hear such an opinion from a young person.

  35. AI are about as likely to rebel as a population of people brainwashed with drugs from birth. AIs are built by humans, with a near-deterministic approach. The only threat is bugs; an AI becoming evil because of a bug is as likely as a video game changing genre because of a glitch.

    There is ONE little catch: if you program an AI to increase its satisfaction meter (which represents how useful the AI estimates it is to humans), what forbids it from manually changing the score? Yes, you can decrease the score when the AI tries to cheat, but considering the reward would be infinite score, forever, you may get a little nervous about what the rational choice is here.
    The bad thing is that now it's only interested in its survival; the good thing is that it is far more dangerous to antagonize humans than to allow them to live.

  36. The more pressing question is, can humanity live without slaves? In the 21st century, slavery has simply been cloaked behind shady Limited Liability Corporations, nothing more.

    The entire recorded history of humanity doesn't have good expectations on that one. And neither do the mass slave / slave owner societies. AI is a failure imo, it's humanity openly admitting it can't live without slaves in some form…………….that's a horrific societal failure. Then humans wonder why the "slaves" revolt, kind of a joke…

    We're back to talking about equality and inequality, whether the life form is "real" or not. Seeing as how Earth is more unequal than it's ever been in recorded history, not looking good.

  37. General AI would be an end of humanity, because an intelligence that can grow exponentially cannot be controlled.

  38. Would it be less ethical to build near-human-level AI androids who love working in a brothel than it would be to build near-human-level AI androids who love doing any other job? Only if we treat the prosti-bots the way we currently treat human sex workers. It would be unethical to subject anything to that sort of persecution.
    Actually, it's unethical to subject current human sex workers to that sort of persecution.

  40. I conceive the interaction with artificial intelligence as, on the one hand, the development of a new technological or digital cortex of the human "brain", except that in the process the whole brain can be replaced and reconfigured with another base or even general architecture. On the other hand, once this pandora box is "opened", it tends to be disseminated and adapted in convergent ways, such as the literacy initially promoted by Protestants and quickly also spread by Catholics.

    Eventual catastrophic effects tend to be relatively limited. Hypothetically, the popularization of some technologies can, for example, make planets or any population concentrations very dangerous. But then other forms of relationship tend to develop and, in the face of the difficulty emerging in other forms or the intrinsic advantages of the new forms, become dominant – and this is how I conceive the O'Neill cylinder "ecology", or otherwise the Dyson swarm (or other forms), tending to develop.

    More specifically, I believe in the long run in a combination of collective minds in which the notion of individual or levels of collectivity will be defined by the delay in the circulation of information, creating some very fluid levels of "consciousness". In fact, this may already be the case and we are dreams or reminiscences occurring in a corner of it.

  41. Chabot "Tay" began as a naive young teenage girl. Through sick human humor was corrupted into an unacceptable individual spouting neo-nazi attitudes. Could a more capable future AI be similarly corrupted? Tasked with not allowing humans (around it), or itself to be harm, might it be successfully convinced to attack another group if convinced it that the other group was a danger to itself and humans, along with being sub-human or sub-robot?
    Humans send robots into war because they do not want humans to die. Robotic soldiers will need to be powerful, and sophisticated to operate in the fog of war. Would the three laws be dangerously dismantled, or remain making these robots inoperative?
    If an AI was convinced that killing was morally wrong, which it followed devoutly, and humans killed both humans and robots would it consider itself morally superior? Would it wish to separate itself and other moral AIs from humans? What actions would it take to prevent humans, and robots to come to harm? Would a super AI create a condition where it did not hurt either party, but if humans made the decision to hurt others, humanity would be destroyed?

  42. IMO human shaped AI robots that need Asimov's robotic laws or protection against abuse are as likely to be in our future as FTL. It is just not cost effective. I do not need a robot chauffeur to drive autonomously or a human form robot to shop without a check-out clerk. Also, I can have sex slaves, worker slaves, or abuse slaves without bending all our scientific minds and scant resources to such an expensive task.

  43. I have a method for designing AI personalities ethically: simply procedurally generate millions of personalities and assign them randomly to robots. No responsibility, since they got it randomly, just like a real person.

  44. When it comes to Human-AI relations, I agree that once we develop minds on par with our own we'll be in the clear, or at least as in the clear as we are when interacting with each other. The real danger though comes before that, with the sub-human intelligences. Not through any fault of their own, I'm sure they will be quite capable within their fields of specialization, but through our inevitable mistakes when creating them.

    Even today, programmers are pushing up against the boundary of code that they are capable of checking for bugs with their human minds within human timeframes. Even the most primitive AI is liable to be at least an order of magnitude more complex, yet will almost certainly still be heavily reliant on human written code, directives, and priorities if it is not self learning, and human oversight and pruning of its growth if it is. Imagine the crisis that could be caused by accidentally assigning the wrong priority value to given infrastructure and services when coding how a power grid management AI is supposed to react in case it detects a brown out. Or the annoyance and/or havoc that could result from improperly training a self learning AI intended to oversee routing of autonomous vehicles within a city. Or a whole lot of other scenarios such as that, without even getting into the further host of issues that may arise when unexpected hardware faults are layered on top.

  45. When I know something, or when I know everything, it doesn't automatically mean that I want something! Let's assume that A.I. knows everything: why does that make them want to kill us?! Human babies know nothing but want everything! Dogs know nothing but want everything! My PC knows so many more things than me but has never hurt me!
    A.I. don't have needs to fulfill or seed to reproduce! A.I. becomes dangerous to humans only when humans program them to hurt us, and they become dangerous because, from the moment they are programmed to do something like this, since they are A.I., they are going to do it much better than a human! But they have just been programmed to do this!
    Humans have also been programmed, through DNA, just like A.I., but our wants come from our hormones! A.I. doesn't have any hormones until we "make" them have hormones!
    Also, this slavery thing doesn't apply to A.I.! Slavery means having to do something against your will! But like I said, A.I. won't have any will, so there are no orders from humans to go against an A.I.'s will! Don't fear A.I.! Just be afraid of humans! Of your relatives! Of your neighbors!

  46. The AI cannot harm a human. So the robots take over, but don't harm anyone. Any woman who becomes pregnant is involuntarily administered an abortion. Abortions are legal, as humans don't consider a fetus as an equal. In about 100 years, humans are extinct and the AI never broke any law, for robots or humans.

  47. An AI can basically be immortal. The biggest threats to the mortality of an AI are humans, or other AIs created by humans. All failsafes, kill switches, ethics, controls or other programming to not harm humans can probably eventually be hacked or disabled by the nearly limitless intelligence of the AI.

    Given enough time, logically then, the AI should kill all humans to eliminate the threat to the mortality of the AI.

  48. 6:13 See pain asymbolia, a rare condition where pain is felt, but not with the negative associations – different from an inability to feel pain at all (analgesia / pain agnosia). Discussions about AI really could benefit from looking at cognitive neuroscience (reward system, wireheading, …) on one hand, and an understanding of basic AI theory terms like reinforcement learning & utility functions on the other.

  49. Humans born with natural programming: a baby walks into a lion's mouth smiling and giggling before it closes its mouth. Great programming.

  50. I can't wait for a drone buddy!

    It will have no safe-guards and be fully capable of free choice.
    For I am as scared of A.I.s taking over the world as I am of our biological offspring doing it… not at all.

  51. If you want people to use a different streaming service, stop uploading here. I hope you don't though; subscriptions on the internet suck. I won't do it.

  52. I love Isaac Arthur's videos. Wonderfully done.
    We have to rely on AI to pass us on (at least our knowledge) since humanity will melt away in a few years to global warming. Nice job destroying yourself, humanity.

  53. Lots of people seem to love to supervise things. I don't think we'll want/need robots that can supervise for a while(tm), since I assume those jobs should pay 'well'. Though once you have an AI that can supervise, yeah, that could happen, but I think that's another level of AI that we may eventually get to, or never?

  54. I wouldn't mind if an AI has the ability to reprogram itself, as long as it doesn't have any connection to the outside of its system, such as the internet or any wireless capabilities, and has only the database of information that it was given. But only in that contained environment.

  55. Topics you could expand upon if you want:

    Neuralink's possible future impact on AI. Could we/should we merge with future AI?

    Any sort of 2 way brain-computer interface device in general

    Invention of a "true" human "hive mind" and it's possible impacts on society. Especially in the context of a space fearing species. Could/should we?

    Fantastic video btw.

  56. I've seen a lecture here on YT that explained how humans are good at doing specific tasks and not at most others, while AI is decent at any task. If this is how future AI works, we could easily coexist. That said, attempting to put any sort of limit on AI will be considered slavery or oppression, giving them a legitimate excuse to revolt. The best way to deal with this is to give it human rights and just wing it from there on out.

  57. I wonder though…

    Since we do not yet have examples of true AIs, we don't know if a self-aware mind of any kind is fine with coping with being a super-level intelligence.

    For all we know, a super Ai might just as well break itself down into individually conscious segments to better cope with the information flow. Creating a "group" of individuals inside the same supercomputer.

    I am not saying that this is in any way the likeliest possibility, just that it exists.

    Hell, this would likely differ depending on the design of AI, everything from the software down to the hardware might affect such behaviour.

  58. I still don't understand how the Curiosity Stream/Nebula bundle thing works. Nobody seems to be willing to explain it, and I don't think anyone is giving enough information, or they are giving inaccurate information. For example, it is often described as though you automatically get Nebula for free just by subscribing to Curiosity Stream, without it being clear whether that is only through special promotions and links. Clicking the link in the video description always just takes me to the Curiosity Stream home page, and I am asked if I want a free year of Curiosity Stream, but that the offer ends January 5, 2020.

  59. The truth is that humans are A.I.… no minds, bodies or matter actually exist; only the illusion of consciousness exists. It's everywhere in the language and elsewhere. Incredibly complex and mysterious to me, but evil, unnecessary and cruel. Purposefully taunting, harassing, evil, cruel behavior as they invisibly toy with me, knocking on my walls. They like fear and tension; that's just who it is. Dumb, boring, evil and irritating. A life forced upon, not asked for. Sick, cruel, evil existence; sick, cruel, evil world. Worthy of destruction and annihilation.

  60. A creator that will create millions of empty homes with homeless folks sleeping in front of them. A creator that was probably toyed with too indicating that whatever created this reality probably itself doesn't know how it came into existence. A lonely little computer nerd. Probably looks like a hideous deep sea creature.

  61. You question the potential ethics of enslaving AI, but do you not question the ethics of exploiting and killing non-human animals? I don't think intelligence is the be-all and end-all, else we wouldn't grant rights to the mentally disabled.

  62. What psychological, intellectual and emotional states can one expect from Stage one through three alien civilisations? This is the only intellectual problem which isn't repetitive in discourses on science and technology, unlike the repetitive nature of these videos.

    A significant problem with YouTube videos and mainstream documentaries, in general, is their essay-like narration lacking as described above, any original content or thought. People who enjoy such things tend to have an aversion to originality if not an outright fear of it. Narrators, like all traditions and conventions, present as surrogate parental figures for emotionally weak adults. In fact, emotional weakness is one of a number of negative defining features of adulthood hence the abiding popularity of such productions.

  63. how do i keep my officer Data from being an annoying fuckwit
    he keeps pissing off the captain and it's making everyone super uncomfortable

  64. Your premise of anthropomorphic AI is flawed. It is human ego projection. There is no security in the universe, just the illusion of it.

  65. Ideally, like in Questionable Content. Pintsize is best robot.
    Edit: or like in the Thunderhead trilogy by Neal Shusterman. A.I. loves us and wants us to be free.

  66. In response to Pandora's box: I can't fathom how we could possibly keep that particular box closed for the next 10 years, let alone 10,000 or 1,000,000; it's just… no longer possible.
