
{DISCUSSION} Artificial Intelligence

Is artificial intelligence really good for us?

Let me preface this post by stating that I am not an engineer. I do not understand how artificial intelligence is built, nor whether there are varying degrees of it. That being said, I do have some strong viewpoints on the concept of artificial intelligence with regard to implementing it in mainstream society. So let’s discuss!

1. Artificial intelligence has the capacity to learn as humans do.

A.I. at its highest concept would have the ability to absorb information, process said information, make calculations, and determine a course of action in a mere fraction of the time it would take a human. In other words, it would have the same kind of processing power as humans but operate at the speed of computers, making it far more efficient than both our current supercomputers and the humans who build them.

Please note that because of this very concept, though, artificial intelligence also has the capacity to learn beyond the speed and comprehension of any human. It would be able to learn every language, advanced math, all the workings of biology, and more and more and more. It could likely solve problems, run experiments, and answer questions we’ve had for hundreds of years. This sounds amazing! We could make amazing breakthroughs in science and research and technology… but we’re no longer the ones doing any of that. We… have become obsolete.

2. Artificial intelligence lacks empathy.

Now that we understand the computing power and magnitude of everything A.I. could do for us, we have to take into account that some things are deliberately not done in research, society, or other places because of the very thing that makes humans, well… human.

Compassion. Empathy. Ethics.

These three things are vital to what it means to be human. And they can’t really be taught to technology (A.I.), because they are felt rather than calculated. Empathy isn’t something that can be explained through mathematical formulas. It’s something one feels. And technology doesn’t feel. Sure, you could give it some semblance of the concept and have it mirror human emotion, but it’s not real. It’s not genuine, and it only goes as far as the technology was programmed to handle.
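To make that last point concrete, here’s a minimal sketch (hypothetical Python, not from any real system) of what ‘mirrored’ empathy amounts to: a lookup table someone filled in ahead of time. Anything outside the table falls flat, which is exactly what I mean by ‘only goes as far as it was programmed to handle.’

```python
# Hypothetical sketch of "mirrored" empathy: canned responses keyed to
# detected emotions. Nothing here is felt; it's just a lookup table.

CANNED_RESPONSES = {
    "sad": "I'm so sorry to hear that. That must be hard.",
    "happy": "That's wonderful! I'm glad things are going well.",
    "angry": "That sounds really frustrating.",
}

def mirror_empathy(detected_emotion: str) -> str:
    """Return a sympathetic-sounding line for a detected emotion.

    The machine feels nothing; it goes only as far as the table its
    programmer filled in. Unanticipated emotions get a generic shrug.
    """
    return CANNED_RESPONSES.get(detected_emotion, "I see.")

print(mirror_empathy("sad"))    # scripted sympathy
print(mirror_empathy("grief"))  # unprogrammed emotion -> "I see."
```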

Unfortunately, this is also the exact answer for why there are still many gaps in the breadth of human knowledge, especially in regards to medicine. There are definitely ways to make astronomical jumps in our knowledge, but to do so scientists would need to forgo basic human rights and treat people as test subjects. That’s unethical. There are even strict regulations for animals used in all lines of research. However, because A.I. has no compassion, no empathy, no ethics, and no moral code, there is nothing preventing it from taking drastic measures to reach its end goal.

3. Artificial intelligence doesn’t sleep.

Part of being a living, biological organism is requiring energy. When we expend too much energy, we get tired and require a recharge, much like batteries. However, it is very likely that artificial intelligence would not require such a thing. Rather, if it exists as a computer plugged into the wall, it would have constant access to power and could continue computing, analyzing, and making adjustments for those 6-9 hours that humans sleep every night (or longer for you people who love sleep. Hee hee!)

The real point here is: do you understand just how much more humans could do if they didn’t require a certain number of hours of sleep every night? The average human spends 1/3 of their life sleeping. It’s insane! That’s 1/3 of our life that we aren’t spending doing fun things, doing work, or leaving the world better than how we found it. Yet A.I. does have that extra 1/3 of its… infinite lifetime. The kind of ideas and conclusions that A.I. could come up with at its processing speed in just eight hours could be both amazing… and severely detrimental to society.

4. Artificial intelligence is designed to correct flaws and shoot for the most perfect outcome.

Part of what makes A.I. what it is, is its ability to assess multiple pathways and outcomes and pick the one that is most beneficial, fastest, or least detrimental. It strives for perfection, precision, and expending the least amount of time and energy to do something. It strives to be perfect. Now, as any human can attest, humans are not perfect. THIS is the real reason why I do not and will not ever support artificial intelligence.

Humans are flawed.
A.I. corrects flaws.
Yet the only way to correct humanity’s flaws is to destroy humanity.
Thus A.I. destroys humanity, its creator.
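To lay that syllogism bare in code: below is a toy sketch (Python, with invented actions and numbers) of an optimizer that scores candidate actions purely by how many human ‘flaws’ remain afterward. With no compassion term anywhere in the objective, notice which action a pure flaw-minimizer picks.

```python
# Toy sketch of a "flaw-correcting" objective. Actions and counts are
# invented; the point is what a pure minimizer does with no ethics term.

actions = {
    # candidate action: human "flaws" remaining afterward (hypothetical)
    "educate humans gradually": 900_000,
    "regulate harmful behavior": 500_000,
    "remove humans entirely": 0,  # no humans, no human flaws
}

def remaining_flaws(action: str) -> int:
    return actions[action]

# Pick the action minimizing remaining flaws -- no compassion term anywhere.
best = min(actions, key=remaining_flaws)
print(best)  # -> "remove humans entirely"
```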

Anyone else see why this is a problem?! Anyone else not think me crazy? Anyone else actually find a large sampling of truth in those dystopian A.I. movies? I sure do! I find a lot of truth in them. Humans are the greatest plague upon this planet and upon ourselves. Yet the only way to firmly and accurately correct that would be to destroy humans (which humanity has tried many, many, many times in history and still continues to do.)

But what do you think?
Do you think A.I. will be good or bad?
Leave your thoughts below!


And check out my discussion from last week:
Stress & Reading

29 thoughts on “{DISCUSSION} Artificial Intelligence”

  1. In general the idea of true artificial intelligence makes me nervous. I’m not talking basics like a computer or a Roomba vacuum or a robo-dog, but like legit I-hold-the-power-of-the-future-in-my-wiring A.I. I mean if they’re cool like Rosie from The Jetsons then alright, I can be down with that, but in general yeah, it makes me really nervous. As you pointed out, true A.I. will be faster, smarter, and void of the cloud of emotions that color humans’ every thought and action. Not to mention they’re programmed to be perfect, which is a threat against humans as we are utterly imperfect creatures with constantly clashing opinions and beliefs. And who programs them in the first place? By what guidelines will it deem perfection? Oh, the debate is endless…
    Idk if you’ve heard of the three laws of robotics by Isaac Asimov? My ethics professor mentioned it once but I mean, I admit they’re pretty good basic rules:
    1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
    3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    …but yeah, personally I don’t think I could ever be convinced that true artificial intelligence is a good idea. I think I’ll just shudder and hide haha 😉

    Although I’ve got to say, one of my favorite A.I. characters is Talis from The Scorpion Rules by Erin Bow, and one of my favorite A.I. portrayals is the Android in the TV show Dark Matter.
    Seriously, Talis is an epic and morally gray character (well, obviously, since the whole concept of A.I. is ethically controversial). Humans designed him to save or “fix” the world and he did; they just don’t like the way he chose to go about it. And the Android in Dark Matter is played so well it’s almost unnerving. Actually, sometimes it almost comes across as bad acting, since she looks human but acts…not, but it dances that line so well and never crosses it. My brother and I are convinced that she’s one of the best actors in the show. Her character never cracks, nor does her perfectly blank expression, whether she’s mentally scanning the ship’s computers for a virus or taking out bad guys with machine guns and kung fu. (And then in seasons 2 and 3 they start adding in more ethical debates about the rights of androids, etc., which is interesting to ponder.)
    Oh! And I’m also curious to see how the A.I. “Thunderhead” in Scythe by Neal Shusterman plays out as the series develops.


    1. I have read I, ROBOT and I have also seen those rules. However, those rules are almost too basic. And there are plenty of cases in which a robot would be forced into inaction, such as in the event that two people are at risk. To save one is to let the other die, but as a robot cannot cause harm, or by inaction allow harm, what would the robot do? Would its ‘brain’ have the ability to make a decision, or would it be forced into a loop of laws that ends up keeping it from doing anything while they both die?
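      To show what I mean, here’s a rough sketch (Python, purely hypothetical scenario) of the First Law coded as a hard filter: reject any plan that, through action or inaction, lets a human come to harm. With two victims and time to save only one, every plan is forbidden, the filter returns nothing, and the robot stalls while both die.

      ```python
      # Hypothetical sketch: the First Law as a hard filter in a
      # two-victims scenario where the robot can only save one person.

      victims = {"A", "B"}

      plans = {
          "save victim A": {"A"},   # B is left to come to harm
          "save victim B": {"B"},   # A is left to come to harm
          "do nothing":    set(),   # both come to harm
      }

      def violates_first_law(saved: set) -> bool:
          # Forbidden if any human is allowed to come to harm.
          return bool(victims - saved)

      allowed = [plan for plan, saved in plans.items()
                 if not violates_first_law(saved)]
      print(allowed)  # -> []  every option is forbidden; the robot freezes
      ```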

      Secondly, as they delved into in the movie I, ROBOT (nothing like the book), the concept of humanity is flawed. We are imperfect, as you stated. And since humanity is actually the biggest plague upon humanity, the easiest way to save humanity is to end it. Thus you are actually ‘protecting’ it by killing/harming people. It’s super messed up logic, but it is logic, which is what A.I. is. :/

      ER MAH GERRRR! SOMEONE’S SEEN DARK MATTER! fangirls all over the place 😀 I’m seriously freakin’ excited because I don’t know of ANYONE who watches the same shows I do. And I LOVE ANDROID. 😀 So awesome! But she has a sense of loyalty to her crew. That is something she gained through her A.I. However, if you’ve seen far enough into the alternate universe, the Android there is only loyal to the Raza. So, it’s very dependent upon circumstances. :/ And yes, the actress is quite amazing (which is funny because I did NOT like her acting in the show Lost Girl. She was not good in that.)

      I haven’t read the two books you mentioned yet. (I’m a little behind.) So, I can’t comment on those, but I’m excited to see more A.I. in fiction, and what you mentioned about The Scorpion Rules really is the flaw of A.I.: it will handle things, but not in the way humanity would deem acceptable. :/


      1. Very very true. Those rules are great…until you start getting into even slightly complex scenarios. They make a good foundation, but as soon as you start trying to apply them to different situations they sort of…stop…working… lol

        YAS DARK MATTER!!! I am still upset that they cancelled it. UGH WHYYYY (Also yeah I wasn’t a fan of hers in Lost Girl either, she does a much better job in Dark Matter)

        Lol my overall takeaway from any A.I. discussion or proposal is #hardpass haha my brain just can’t handle trying to cover every possible scenario and ensuring that the A.I. won’t conflict with SOMEone’s perspective of humanity. I’d rather just keep it in books and movies, etc. 😜 Humans are the familiar enemy haha


        1. WHAT?!?!?!?! THEY CANCELLED DARK MATTER?!?!?! WHEN?! WHY?! IT’S SO AMAZING!!! (To be honest, her character in Dark Matter is basically the amount of emotion she gave in Lost Girl. Hahahaha!)

          Hahaha! Very true. Humans are the familiar enemy and even then we don’t always understand them. So AI should definitely be kept to fiction because, like you said, humans can’t fathom all the possibilities. There would end up being a flaw or error somewhere and we’d destroy ourselves. sigh


          1. I KNOW RIGHT????? A TRAVESTY!! My brother and I binge-watched season 3 when it came out on Netflix then when I googled for the season 4 info I found out it had been cancelled! UGH. (Lol and yeah that is true 😜)


            1. GAhflubmajrawr! I’m pissed! I had so many questions and wanted to know so much about the characters. Granted, I wasn’t really a fan of the Four-becomes-emperor plotline. Like… that probably wasn’t a good choice, but I loved the rest of it! UGH! So pissed!


  2. I find AI and its potentially enormous place in our future society a fascinating subject. It’s almost creepy because of all the movies that showed us a future incorporating AI years before it actually happened. I think in the distant future AI will become very important in society and could even be used to make big decisions, since we are too greedy, selfish, and egotistical for the most part to make the right ones! (World governments, I’m talking about you!) Whatever the case is, it’s very interesting to think about, so thanks for the read 🙂


    1. I like your line of thought, Candy, but there is a problem with A.I. making decisions for people. They aren’t people. They don’t understand people. Their general thought processes are based on logic and finding the most ideal outcome. Unfortunately, what if that outcome said that a particular human (or group of humans) was worth sacrificing for the greater good? After all, who is to say whether an A.I. would have the moral compass that drives human ethics? I ask you this. 🙂

      Thank you for joining the discussion, by the way! 😀 I’m glad you find A.I. an interesting topic!


      1. I completely agree with you; it’s a frightening prospect, but unfortunately that’s where society is heading. Efficiency, and not having to ‘pay humans’ for jobs (which A.I. is increasingly replacing), is just one of the ways it has taken over. I agree robots are not human, but they don’t talk back, want pay raises, or have emotions. I can see, as dark as it is, how it could be seen as something fruitful, cost-effective, and ultimately easier to control. However, the moral compass issue is troubling. We differ from A.I. BECAUSE we have that conscience, that drive to do and want ‘good’ (most of us). Which compass will A.I. gravitate towards, if any? And how will we cope with the fact that in the future they could have their OWN RIGHTS! This is troubling stuff; did you see the Sophia interview (first robot) on CNBC, Business Insider, etc.?

        Anyway, long comment but I could honestly speak on this topic forever 😀 thanks for replying back!


        1. The companies may see AI as an opportunity to save money, but it’s not good for the economy if you’re putting thousands of people out of work. What happens when we reach a point in society where all jobs are taken over by AI and nothing is left for humans? How will people survive? I don’t want to find out. I hope I’m not here when that happens. (Or I’m going to the middle of the woods to live on my own.)

          AI doesn’t generally have a moral compass. Its compass is reliant upon efficiency and perfection. That… isn’t always the best thing. :/

          I DID see the Sophia interview. It’s some scary shit, man. I don’t agree with it at all. If you watch her facial expressions and the way she interacts, it’s very human-like, but not compassionate human: condescending human. She makes fun of the interviewer and then passes it off with a smile, like a not-nice human being would do. It’s extremely disturbing to me and I am horrified to see it. I just… no.

          And of course! I’m happy to respond. The whole point is a discussion. Hee hee! 😀


  3. What about the 3 laws? IF we can hardwire in the inability to harm humans?

    What about chemistry? Brain chemistry is responsible for a lot of human behaviors. What if we could somehow encode similar “digital chemistry,” as it were?

    Have you read Barren Cove? It has “drug using” robots in a world where humanity is almost gone.


    1. Aha! I knew I’d had this discussion somewhere before. It just wasn’t my discussion. :p

      If we recall, though, wasn’t there a loophole in the 3 laws? Wasn’t it that the only way to protect humans was to end humanity? (Or was that just in the movie?) Additionally, I’m curious about A.I. in vehicles. Say the A.I. had to choose between hitting a pedestrian and running the car off a cliff: which would it choose if it couldn’t hurt human life? (I ask because cars are currently the focus of real-world discussion for the first implementation of A.I.)

      Mm. Brain chemistry is an interesting concept that you bring up. What you’re suggesting is basically encoding chemical changes? I think it’s an interesting idea, but the biggest problem is that we still know so little about actual brain chemistry: how certain chemicals are produced, and why. If we did understand it, though, we’d then have to figure out a code for how this would impact judgment (and hope the A.I. doesn’t negate said code and deem it useless in overall terms of productivity, because the very essence of A.I. is the ability to think for itself. :/ )
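      If it helps, here’s the kind of thing I picture, as a purely speculative sketch (all names and numbers invented): a ‘digital chemistry’ state whose weights bias how the machine scores outcomes. The catch I just mentioned is right there in the code: the weights are only numbers, and a self-modifying optimizer could zero them out the moment they cost it productivity.

      ```python
      # Purely speculative sketch of "digital chemistry": emotion-like
      # weights that bias outcome scoring. Everything here is invented.

      from dataclasses import dataclass

      @dataclass
      class DigitalChemistry:
          empathy: float = 0.8   # weight on harm to others
          drive: float = 0.5     # weight on raw productivity

      def score(productivity: float, harm: float, chem: DigitalChemistry) -> float:
          # Higher empathy makes harmful outcomes score worse.
          return chem.drive * productivity - chem.empathy * harm

      chem = DigitalChemistry()
      print(score(10.0, 5.0, chem))  # 1.0 while empathy is intact

      # The worry: a self-optimizing system treats the weights as just
      # another parameter and edits them away.
      chem.empathy = 0.0
      print(score(10.0, 5.0, chem))  # 5.0 -- harm no longer registers
      ```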

      And I have not read Barren Cove. I will have to look into it. 🙂


        1. Mm. That is probably true. I mean, we have no idea what consciousness is. So, how could we possibly create it? Hopefully the A.I. we make in my lifetime will be very simplistic and not self-thinking or advancing. 🙂


  4. I’m not well-versed in the ways of AI, but from where I’m sitting, the subject is one of great concern if for no other reason than the fact that someone or “someones” are programming these robots. This means these robots are computing and processing based on the information they’ve been given. Countries, companies, and organizations are filled with people making decisions based on agendas and worldviews. Something as simple as Sophia’s comment, “Don’t worry. If you’re nice to me, I’ll be nice to you,” should concern us. How do her programmers define “nice”? https://www.youtube.com/watch?v=S5t6K9iwcdw
    “Nice” can mean something very different depending on people and culture. In a world that doesn’t believe in absolutes anymore, this should really concern us. If all AI were used for was faster math and better science, then the conversation would be much different, but you’ll never convince me that that is the only intent.
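    Just to make the “nice” point concrete, here’s a tiny hypothetical sketch (the rule itself is invented): whatever “nice” means ends up as a predicate some programmer wrote, with their culture and agenda frozen into it.

    ```python
    # Hypothetical sketch: "nice" as a programmer-defined predicate.
    # The rule is invented; the point is whose worldview gets frozen in.

    FLAGGED_WORDS = {"destroy", "stupid", "hate"}

    def is_nice(utterance: str) -> bool:
        # One team's definition: no flagged words means "nice."
        return not any(word in utterance.lower() for word in FLAGGED_WORDS)

    # Sarcasm and condescension sail right through this definition:
    print(is_nice("What a charmingly primitive question."))  # True
    print(is_nice("I hate Mondays."))                        # False
    ```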


    1. WHAT?! WHERE DID YOU FIND THAT VIDEO??!?! (I just watched it and am absolutely freaking out/super pissed.) So, that robot, Sophia, is sassy and, to be quite frank, judgmental. She makes it sound like there is no reason to fear robots that can think for themselves, but isn’t that kind of the biggest fear for humanity? Humanity is on top and unthreatened by any other species because no other species is sentient in our manner. No other species can think like we can and/or be as productive as us. (So we think.) But she literally just demeaned that interviewer and we think this is a good thing? I don’t think so. I do not agree with that at all. How did anyone get the opportunity to do this? It should have been cut off before it started.

      EXACTLY! Your statement reminds me of the movie Chappie (which I just saw for the first time), where humans were able to make Chappie think he was putting people to sleep by sticking a knife in their gut. Anything is a matter of perception. If you tell a robot that it can’t kill/harm humans, but then don’t explain all the ways in which it could do that, it will never know whether or not it is harming humans.

      And I’m totally with you. People are already talking about A.I. advising politics and making decisions when it does not, and never will, have a genuine understanding of compassion, morals, ethics, humanity, etc. Baby A.I. could be good for cars or tech, but never for social aspects. Never for making decisions for humans. I do not support this and I hope we will not have to fight this in my lifetime.


  5. There are different levels of AI and we are already seeing quite a bit of it. Will it ever get to the point of “machine sentience”? Scary thought. I’m with you on most of it. We need to have limits.


    1. Exactly. I agree that certain ‘levels’ of A.I. could be vital to society in areas like automobiles, public transit, etc. It could potentially enhance safety. Or it could be used in hospitals for basic analysis or taking blood samples, etc.

      However, the concept of allowing A.I. to govern people and make actual decisions that influence people’s lives beyond day-to-day tasks would be too much power for a non-human. That’s like allowing a chimpanzee (which is nearly as smart as a human) to make decisions for us. And even the chimpanzee possesses the capacity for empathy and compassion. A.I. does not. It would likely reintroduce communism, because it doesn’t care about happiness per se so much as productivity. :/


  6. You’ve identified the reasons why they make such great stories. Your line of thought leads to stories like The Terminator. When I wrote Lisa Burton, I gave her emotional software. This led to a different kind of struggle for her. Her creators were able to dial it up or down depending on her reactions to certain things. My robot made for a very human struggle. We have versions of AI with us now. Think about FB and Amazon’s famous algorithms. Humans aren’t placing people in FB jail, or removing book reviews.


    1. Hahaha! I read a lot of scifi and am very involved in ethical debates in the real science world. :p So, I guess I understand the side effects that could come about. After all, the saying goes ‘Plan for the worst, hope for the best.’

      I don’t really believe that Facebook’s and Amazon’s algorithms constitute AI, because they are still programs. They don’t have the ability to evolve on their own. I mean, yes, they make decisions on their own, but it’s based on something they were told to do. True A.I. would have the ability to learn. That’s what scares me. Though baby A.I. (like in cars) also scares me, because if your car is designed to brake when it thinks you’re too close to another car, what happens if you’re tailgating? Will your car slam on the brakes in the middle of the highway? That doesn’t sound like an advantage to me. :/
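      To put the tailgating worry in concrete terms, here’s a crude sketch (invented numbers; real systems are far more sophisticated) of the kind of threshold rule a ‘baby A.I.’ braking feature boils down to. A bare time-gap cutoff can’t tell deliberate tailgating from a genuine emergency, so it brakes hard either way.

      ```python
      # Crude sketch of a threshold-based emergency-braking rule.
      # Numbers are invented for illustration only.

      SAFE_GAP_SECONDS = 2.0  # hypothetical minimum following gap

      def should_emergency_brake(gap_m: float, speed_mps: float) -> bool:
          """Brake hard if the time gap to the car ahead is too small."""
          if speed_mps <= 0:
              return False
          return (gap_m / speed_mps) < SAFE_GAP_SECONDS

      # A deliberate tailgater trips the same rule as a real emergency,
      # so the car slams the brakes in moving traffic either way.
      print(should_emergency_brake(gap_m=15.0, speed_mps=30.0))  # True  (0.5 s)
      print(should_emergency_brake(gap_m=90.0, speed_mps=30.0))  # False (3.0 s)
      ```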

      And the biggest problem with programming emotions like you did with Lisa Burton is that… humans do not understand emotions. Emotions, for the most part, are not logical. We think they are, but that’s only because society molds how we interact and react to things; the truth of the matter is that we do not understand them. How do you tell technology to ‘feel’ something? You can’t give it chemical endorphins to feel excited or sad. So how would you really write that into code, and wouldn’t it always be at the programmer’s discretion? What if true AI realized that emotions only get in the way of perfection? What then?

      hasn’t run circles around this topic AT ALL :p


      1. Fun topic though. Lisa was at the mercy of her programmers and what they thought would be appropriate. Sometimes it was inappropriate. There was a story in the news recently about some program evolving itself to a small degree. I thought it was FB, but my memory isn’t the best anymore.


        1. I was actually sent a video by one of the other commenters about an A.I. named Sophia: https://www.youtube.com/watch?v=S5t6K9iwcdw It’s… honestly very creepy. I do not support this because, if you pay attention, you’ll notice the demeaning way in which she says things, but then she offers a smile like a human would. It’s realistic, but also extremely creepy, because it means the A.I. understands manipulation. You say something mean but with a smile and it’s all good. That is the most dangerous thing to give to A.I., in my opinion.


