Sands In Time Episode 12: Socialism is Slavery

The title sums up the episode. Here, Joshua gives his reasoning as to why he sees Socialism and other Statist governing models as varyingly unfree forms of slavery for the governed. He looks at examples from both our world and his.

Show Notes:

Subscribe to the show for free on iTunes and/or Stitcher

Aswain Books Novels Page. Thanks for your support!

Aswain Books Contact Page. Let me know what you think.

Also on YouTube

My Facebook Author Page. Please like and share with your friends to stay informed on updates and giveaways.

Twitter – @TheAdamJAustin

Music Credit: “Med Cezir” by Hayvanlar Alemi (www.hayvanlaralemi.org)

Artificial Intelligence and the Non-Aggression Principle

There have been discussions about artificial intelligence for years now. The subject has been a source of wonder and a useful, entertaining topic for a wide range of science fiction films and books. Now though, with each passing day and each technological step, some claim we’re getting closer to actually crossing the threshold of giving birth to AI. In a recent article in The Sun, Elon Musk said he believed AI could destroy the human race. That’s nice. Though I don’t agree with many of Musk’s points in the article, the statement gives us some bitter food for thought. He seems to think the ‘end’ would be a result of humans no longer having jobs and therefore no longer feeling any ‘meaning’ to existence. Other scientists and techies in the article say AI could be right on our doorstep and may very well become a reality in as little as two decades. I think, if Musk’s conclusion that AI could end us is correct, it’ll be due to the new beings lacking emotional understanding or any sense of empathy.

In essence, the new, artificially intelligent beings won’t likely adhere to the Non-Aggression Principle (the NAP). For those who don’t know, the NAP is the cornerstone, and indeed the moral and ethical code, for many libertarians. It essentially says aggression, or the initiation of force, is bad and wrong. It then follows that no one should aggress against another by harming their body, destroying or stealing their property, or committing fraud (which is basically the same as stealing). Of course, self-defense is OK since that’s a natural and necessary response to others’ initiated aggression.

In another article on BBC.com from December of 2014, Stephen Hawking also said he believed AI could end mankind. To paraphrase, he noted that we, humanity, are limited by slow biological evolution, which puts us at a disadvantage compared with the potentially light-speed adjustments and evolution of artificial intelligence. I believe it was precisely those slow adjustments and the gradual growth of our brains that led us to respect others and their property, which is, again, core to the NAP. Although the NAP may sound like common sense, how did it become common? Not overnight. No, it became common sense through understanding born of slow shifts in our consciousness. The principle is a belief, a philosophy, a lifestyle preference. It didn’t spring up out of nowhere.

Hawking also said, “It [AI] would take off on its own and re-design itself at an ever increasing rate.” Once the genie is out of the bottle, so to speak, our total control over an intelligent, brilliant entity will eventually vanish. Of course, when dealing with a being as intelligent as or more intelligent than humans, we shouldn’t seek to have control or ownership of it, because that would be a violation of the NAP. But would the AI feel the same about us? I think there’s a good chance they wouldn’t. It seems to me the AI entity or beings would more than likely be statists who wouldn’t abide by the NAP.

I’ve given a brief explanation of the non-aggression principle, but what about the State?

As Rothbard wrote in “Anatomy of the State” when defining the parasitic State: “The State provides a legal, orderly, systematic channel for the predation of private property; it renders certain, secure, and relatively ‘peaceful’ the lifeline of the parasitic caste in society. Since production must always precede predation, the free market is anterior to the State. The State has never been created by a ‘social contract’; it has always been born in conquest and exploitation.”

So how does this relate to AI? AI would more than likely be much more logically minded than humans. Their emotional aptitude could either be completely absent or as vastly different from humanity’s as that of reptiles, insects or fish. Higher intelligence doesn’t necessarily create emotional feelings or responses. Dogs are emotional creatures, sometimes even more loving and forgiving than humans. The doorbell rings and my emotionally fiery schnauzer is instantly ready to rip the potential invader’s head off – or at least bluff that she’ll do so. Yet I don’t see her or other canines inventing doggy vehicles or anything else for that matter. Intelligence isn’t always clearly defined by scientists or psychologists. Most simply see it as the ability to think for oneself (“I think, therefore I am,” as Descartes put it), or as an ability to acquire and apply knowledge and skills with varying levels of ‘cleverness.’ Unfortunately, being intelligent doesn’t mean one will have a strong grasp or display of emotions. In fact, logic and emotions can be seen as opposites. Think of Spock and McCoy going at it in round after round of various arguments in Star Trek. Our favorite pointy-eared alien represented the logical, intelligent side, and Bones, the lovably crotchety doctor, defended the emotional and moral side. They were at odds.

We can also, more darkly, think of psychopaths, who could very well be the norm in any independent artificially intelligent individual or community. It’s been noted that psychopaths lack any semblance of empathy; their capacity for emotional caring is severely limited. With the possibility, or likelihood, that AI could very well end up psychopathic, it seems possible its individual or collective mind would mirror that of the State as defined by Rothbard above.

We non-psychopathic humans are emotional beings with varying levels of empathy and respect for others’ property. That being said, I fully respect logic, of course. Arguments in any debate should be grounded in logic as much as possible in order to create sound points that support one’s position. Going further, though, I’m not a Vulcan, I’m a human. I don’t live fully by logic. My support of and adherence to the non-aggression principle, as with most other libertarians’, is essentially an emotional lifestyle choice. I believe it’s the most moral, ethical guide for myself and others to lead prosperous and respectful lives. I doubt a psychopath would follow similar reasoning, and I’m not sure any artificially intelligent being would either. Since the AI individual, whose lightning-fast thought process would likely be either completely or heavily grounded in logic (especially at first), would probably not choose to abide by the morals and ethics of the NAP, the opposite seems likely: AI ‘people’ would either follow and support an existing State or create a new one of their own.

Sure, I suppose it’s possible any political system either run or supported by AI could be voluntary and free of force, but that seems unlikely. One may think an AI grounded either completely or even partially in logic would have no need for exploitation or conquest. I don’t believe that’s true. Let’s look at each scenario:

Say we have AI individuals who somehow hold absolutely no emotion. First, from our vantage point, we’d have to wonder what would even be the point of existing or going on. There’d be no wonder, no excitement, no curiosity, no hope, no love, no fear, no anger, no joy, no drive, nothing. Secondly, for a being with no emotion, it wouldn’t be possible to care whether another individual, or an entire species, be it AI, human, dog or whatever, lives or dies. The emotionless individual would care as much as a toaster cares when and if it burns toast. It follows, then, that zero-emotion beings wouldn’t follow the NAP. How could they? Our choice not to aggress against others, and our conviction that doing so is wrong, is a moral one. Morals stem from our feelings and beliefs.

OK, now say we have AI that has emotion and isn’t guided by logic alone. Wouldn’t they, free to reach a lifestyle preference similar to NAP-supporting libertarians, wish and work for a world free of tyranny or force? I sincerely hope so – we liberty lovers can use all the help we can get – but for our future artificially intelligent neighbors, it doesn’t seem likely. I say this because our brains are wired, deeply seated, ingrained, slow-cooked with the ability to enjoy and abide by good morals and feelings. Yes, yes, there are bad people and there are always exceptions, but the fact remains that we’re family-based beings. Our brains actually release endorphins when we’re around our friends and loved ones, for crying out loud. Our very bodies encourage us to support and appreciate others. It’s these feelings that help us learn and understand right and wrong. It’s through these feelings that we can even imagine or understand the ‘rightness’ of the NAP. We actually feel it. It’s become common sense.

With those speed-of-light minds Hawking warns us about, flying and evolving fast and furious without the connections and encouragement our brains have developed over time, why the hell should they care about others’ property rights? Wouldn’t their logic alone lean more towards something akin to the Statist principles of “the ends justify the means” or “do what needs to be done for the common good”? Or who’s to say their behaviors, attitudes or actions wouldn’t align more with a reptilian or insect, hive-minded brain? Yes, one could say the AI could be programmed to be more like us, but once they’re intelligent and ‘alive,’ free and clear, free to evolve potentially any which way the wind blows, who the heck knows what the end result will be? Again, I have a hunch their moral code won’t match Murray Rothbard’s or Walter Block’s or Lew Rockwell’s or Ron Paul’s or my own.

Science fiction has always had a knack for giving us interesting and speculative glimpses into what the technology of tomorrow holds. Of course, AI is no exception. Not by a long shot. Look at all the AI in TV, print and film: Skynet and its nightmarish, metallic monsters, the Terminators; Ridley Scott’s Ash from the movie Alien and then David from Prometheus; Data from Star Trek (he was a good guy, but his ‘twin’ brother, Lore… not so much); Bishop from Aliens, who wound up being a hero; and who could forget HAL 9000 from 2001: A Space Odyssey?

Most notable of all in the science fiction realm of AI was the work of Isaac Asimov. His three rules or laws for robotics, and by extension AI, were a code that would be imprinted into the machine’s consciousness, which would, in many regards, make them very close to having a libertarian mindset. The laws, to sum up, say: 1) A robot may not hurt a human or allow a human to get hurt, 2) A robot must obey orders given to it by humans unless they break the first law, and 3) A robot must protect itself as long as that doesn’t conflict with laws 1 or 2. Sounds good; however, Asimov wrote these rules decades ago, back before we held even a fraction of the understanding of computers and technology we hold now. Unfortunately, again, it’s likely modern brilliant minds like Professor Hawking hold a better vision and therefore produce a more accurate conclusion than Asimov. With an intelligent brain able to think independently and be as quick and adaptable as what Hawking describes, it seems inevitable that the intelligence will find a way to override any such limitations. In all probability, the adjustment could come very quickly. From there, it would be possible for a new code of rules and ethics, one that probably won’t hold the NAP as a high priority, to be written, spread and downloaded in the blink of a robotic eye.
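Read as a specification rather than fiction, the Three Laws amount to a strict priority ordering over constraints. Here’s a minimal sketch in Python, purely illustrative – the Action fields and the permitted function are hypothetical names of my own, not anything from Asimov or from real robotics software – of how such an ordered rule check might look:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Hypothetical description of a proposed action's predicted effects."""
    harms_human: bool        # would the action injure a human?
    allows_human_harm: bool  # would it let a human come to harm through inaction?
    ordered_by_human: bool   # was the action commanded by a human?
    endangers_self: bool     # would it damage the robot itself?

def permitted(action: Action) -> bool:
    """Evaluate the Three Laws as a strict priority ordering: each law
    is consulted only if no higher-priority law has already decided."""
    # Law 1: a robot may not hurt a human or allow a human to get hurt.
    if action.harms_human or action.allows_human_harm:
        return False
    # Law 2: obey orders given by humans; any order conflicting with
    # Law 1 was already rejected above, so obedience wins from here on.
    if action.ordered_by_human:
        return True
    # Law 3: protect itself, so long as that conflicts with neither
    # Law 1 nor Law 2 (both already handled above).
    return not action.endangers_self

# Example: an order that would harm a human is refused outright.
print(permitted(Action(harms_human=True, allows_human_harm=False,
                       ordered_by_human=True, endangers_self=False)))  # False
```

The fragility the Hawking scenario implies is visible even in this toy version: the whole ordering lives in ordinary, rewritable code, so a mind able to re-design itself at an ever increasing rate could reorder, weaken or delete these checks as easily as we just wrote them.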

In conclusion, it’s worth noting this article isn’t anti-technology. Advancements are probably the only thing that’ll save our species in the long, long run (that marvelous Sun of ours isn’t going to last forever). New gadgets and gizmos are cool, and they show how magnificent our brains, our drives and our dreams can be. Still, some things have been created which humanity would probably be better off without: nuclear and chemical weapons, torture techniques, reality TV and so on. Artificial intelligence… well, the final verdict is, of course, yet to come; however, it’s likely those minds won’t embrace the non-aggression principle for the reasons mentioned, and as a result they’ll likely be Statist. Do we really need more non-NAP-abiding, quickly calculating and cunning Statists in our world? No. Will concerns like Musk’s, Hawking’s or my own end up true? Only the future knows.

Sands In Time Episode 11: All Words Are Not Equal

In this episode, Joshua looks at some terms libertarians use that carry meanings they’re both in favor of and against. He also explains how, with a larger, more powerful vocabulary and a deeper understanding of language, we think better and more deeply while gaining a greater understanding of ourselves and our world.

Show Notes:

Information on Murray N. Rothbard

Information on Ludwig von Mises

Subscribe to the show for free on iTunes and/or Stitcher

Aswain Books Novels Page. Thanks for your support!

Aswain Books Contact Page. Let me know what you think.

Also on YouTube

My Facebook Author Page. Please like and share with your friends to stay informed on updates and giveaways.

Twitter – @TheAdamJAustin

Music Credit: “Med Cezir” by Hayvanlar Alemi (www.hayvanlaralemi.org)

Sands In Time Episode 10: Backstory – August Cross

In this episode, Joshua gives us some background information on August Cross. Cross was / will be the founder of Aswain and one of the key figures in the free world’s fight against the evil, corrupt, global union, Tehlasrin.

Show Notes:

Subscribe to the show for free on iTunes and/or Stitcher

Aswain Books Novels Page. Thanks for your support!

Aswain Books Contact Page. Let me know what you think.

Also on YouTube

My Facebook Author Page. Please like and share with your friends to stay informed on updates and giveaways.

Twitter – @TheAdamJAustin

Music Credit: “Med Cezir” by Hayvanlar Alemi (www.hayvanlaralemi.org)

Sands In Time Episode 9: Hailiorea Groups and Technology

Even with all the newsworthy current events going on, Joshua uses this episode to talk a bit about Hailiorea lore and history. He channels an inventor from his world named Mike Taft, who gives insight and information about several of the fantastic inventions he helped create.

Show Notes:

Subscribe to the show for free on iTunes and/or Stitcher

Aswain Books Novels Page. Thanks for your support!

Aswain Books Contact Page. Let me know what you think.

Also on YouTube

My Facebook Author Page. Please like and share with your friends to stay informed on updates and giveaways.

Twitter – @TheAdamJAustin

Music Credit: “Med Cezir” by Hayvanlar Alemi (www.hayvanlaralemi.org)