Artificial Intelligence and the Non-Aggression Principle

There have been discussions about artificial intelligence for years now. The subject has been a source of wonder and a useful, entertaining topic for a wide range of science fiction films and books. Now though, with each day and each technological step, some claim we’re getting closer to actually crossing the threshold of giving birth to AI. In a recent article in The Sun, Elon Musk said he believed AI could destroy the human race. That’s nice. Though I don’t agree with many of Musk’s points in the article, the statement gives us some bitter food for thought. He seems to think the ‘end’ would be a result of humans no longer having jobs and therefore no longer feeling any ‘meaning’ to existence. Other scientists and techies in the article say AI could be right on our doorstep and may very well become a reality in as little as two decades. I think, if Musk’s conclusion that AI could end us is correct, it’ll be due to the new beings lacking emotional understanding or any sense of empathy.

In essence, the new, artificially intelligent beings won’t likely adhere to the Non-Aggression Principle (the NAP). For those who don’t know, the NAP is the cornerstone, and indeed the moral and ethical code, for many libertarians. It essentially says aggression, the initiation of force, is wrong. It then follows that no one should aggress against another by harming their body, destroying or stealing their property or committing fraud (which is basically the same as stealing). Of course, self-defense is OK, since that’s a natural and necessary response to others’ initiated aggression.

In another article, from December of 2014, Stephen Hawking also said he believed AI could end mankind. To paraphrase, he noted that we, humanity, are limited by slow biological evolution, which puts us at a disadvantage compared with the potentially light-speed adjustments and evolutions of artificial intelligence. I believe it was precisely those slow adjustments and the gradual growth of our brains that led us to respect others and their property, which is, again, core to the NAP. Although the NAP may sound like common sense, how did it become common? Not overnight. No, it became common sense through understanding born of slow shifts in our consciousness. The principle is a belief, a philosophy, a lifestyle preference. It didn’t spring up out of nowhere.

Hawking also said, “It [AI] would take off on its own and re-design itself at an ever increasing rate.” Once the genie is out of the bottle, so to speak, our total control over an intelligent, brilliant entity will eventually vanish. Of course, when dealing with a being as intelligent as or more intelligent than humans, we shouldn’t seek to control or own it, because that would be a violation of the NAP. But would the AI feel the same about us? I think there’s a good chance it wouldn’t. It seems to me the AI entity or beings would more than likely be statists who wouldn’t abide by the NAP.

I’ve given a brief explanation of the non-aggression principle, but what about the State?

As Rothbard wrote in “Anatomy of the State” when defining the parasitic State: “The State provides a legal, orderly, systematic channel for the predation of private property; it renders certain, secure, and relatively “peaceful” the lifeline of the parasitic caste in society. Since production must always precede predation, the free market is anterior to the State. The State has never been created by a “social contract”; it has always been born in conquest and exploitation.”

So how does this relate to AI? AI would more than likely be much more logically minded than humans. Their emotional aptitude could either be completely absent or as vastly different from humanity’s as that of reptiles, insects or fish. Higher intelligence doesn’t necessarily create emotional feelings or responses. Dogs are emotional creatures, sometimes even more loving and forgiving than humans. The doorbell rings and my emotionally fiery schnauzer is instantly ready to rip the potential invader’s head off, or at least bluff that she’ll do so. Yet I don’t see her or other canines inventing doggy vehicles, or anything else for that matter. Intelligence isn’t always clearly defined by scientists or psychologists. Most simply see it as the ability to think for oneself (“I think, therefore I am,” to paraphrase Descartes), or as an ability to acquire and apply knowledge and skills with varying levels of ‘cleverness.’

Unfortunately, being intelligent doesn’t mean one will have a strong grasp or display of emotions. In fact, logic and emotions can be seen as opposites. Think of Spock and McCoy going at it in round after round of various arguments in Star Trek. Our favorite pointy-eared alien represented the logical, intelligent side, and Bones, the lovable, crotchety Doctor, defended the emotional and moral side. They were at odds. We can also, more darkly, think of psychopaths, who could very well be the norm in any independent artificially intelligent individual or community. It’s been noted that psychopaths lack any semblance of empathy; their emotional caring is severely lacking. With the possibility, or likelihood, that AI could end up psychopathic, it seems possible its individual or collective mind would mirror that of the State as defined by Rothbard above.

We non-psychopathic humans are emotional beings with varying levels of empathy and respect for others’ property. That being said, I fully respect logic, of course. Arguments in any debate should be grounded in logic as much as possible in order to create sound points that support one’s position. Going further, though, I’m not a Vulcan, I’m a human. I don’t live fully by logic. My support of and adherence to the non-aggression principle, as with most other libertarians, is essentially an emotional lifestyle choice. I believe it’s the most moral, ethical guide for myself and others to lead prosperous and respectful lives. I doubt a psychopath would follow similar reasoning, and I’m not sure any artificially intelligent being would either. Since the AI individual, whose lightning-fast thought process would likely be either completely or heavily grounded in logic (especially at first), would probably not choose to abide by the morals and ethics of the NAP, the opposite seems likely: AI ‘people’ would either follow and support an existing State or create a new one of their own.

Sure, I suppose it’s possible any political system either run or supported by AI could be voluntary and free of force, but that seems unlikely. One may think an AI with an emphasis either completely or even partially on logic would have no need for exploitation or conquest. I don’t believe that’s true. Let’s look at each scenario:

Say we have AI individuals who somehow hold absolutely no emotion. First, from our vantage point, we’d have to wonder what would even be the point of existing or going on. There’d be no wonder, no excitement, no curiosity, no hope, no love, no fear, no anger, no joy, no drive, nothing. Second, for a being with no emotion, it wouldn’t be possible to care whether another individual, or an entire species, be it AI, human, dog or whatever, lives or dies. The emotionless individual would care as much as a toaster cares when and if it burns toast. It follows, then, that zero-emotion beings wouldn’t follow the NAP. How could they? Our choice not to aggress against others, and our belief that doing so is wrong, is a moral one. Morals stem from our feelings and beliefs.

OK, now say we have AI that has emotion and isn’t guided by logic and logic alone. Wouldn’t they, free to reach a lifestyle preference similar to NAP-supporting libertarians, wish and work for a world free of tyranny or force? I sincerely hope so, we liberty lovers can use all the help we can get, but for our future artificially intelligent neighbors, it doesn’t seem likely. I say this because our brains are wired, have been deeply seeded, ingrained, slow-cooked with the ability to enjoy and abide by good morals and feelings. Yes, yes, there are bad people and there are always exceptions, but the fact remains that we’re family-based beings. Our brains actually release endorphins when we’re around our friends and loved ones, for crying out loud. Our very bodies encourage us to support and appreciate others. It’s these feelings that help us learn and understand right and wrong. It’s through these feelings that we can even imagine or understand the ‘rightness’ of the NAP. We actually feel it. It’s become common sense. With those speed-of-light minds Hawking warns us about, flying and evolving fast and furious without the connections and encouragement our brains have developed over time, why the hell should they care about others’ property rights? Wouldn’t their limited logic lean more towards something akin to the Statist principles of “the ends justify the means” or “do what needs to be done for the common good”? Or who’s to say their behaviors, attitudes or actions wouldn’t align more with a reptilian or insect, hive-minded brain? Yeah, one could say the AI could be programmed to be more like us, but once they’re intelligent and ‘alive,’ free and clear, free to evolve, potentially any which way the wind blows, who the heck knows what the end result will be? Again, I have a hunch their moral code won’t match Murray Rothbard’s or Walter Block’s or Lew Rockwell’s or Ron Paul’s or my own.

Science fiction has always had a knack for giving us interesting and speculative glimpses into what the technology of tomorrow holds. Of course, AI is no exception. Not by a long shot. Look at all the AI in TV, print and film: Skynet and its nightmarish, metallic monsters, the Terminators; Ridley Scott’s Ash from the movie Alien and then David from Prometheus; Data from Star Trek (he was a good guy, but his ‘twin’ brother, Lore… not so much); Bishop from Aliens, who wound up being a hero; and who could forget HAL 9000 from 2001: A Space Odyssey? Most notable of all in the science fiction realm of AI was the work of Isaac Asimov. His three rules or laws for robotics, and by extension AI, were a code that would be imprinted into the machine’s consciousness, which would, in many regards, make them very close to having a libertarian mindset. The laws, to sum up, say: 1) A robot may not hurt a human or allow a human to get hurt, 2) A robot must obey orders given to it by humans unless they break the first law, and 3) A robot must protect itself as long as that doesn’t conflict with laws 1 or 2. Sounds good. However, Asimov wrote these rules decades ago, back before we held even a fraction of the understanding of computers and technology we hold now. Unfortunately, again, it’s likely modern brilliant minds like Professor Hawking hold a better vision and therefore produce a more accurate conclusion than Asimov. With an intelligent brain able to think independently and be as quick and adaptable as what Hawking describes, it seems inevitable that the intelligence will find a way to override any such limitations. In all probability, the adjustment could come very quickly. From there, it would be possible for a new code of rules and ethics, one that probably won’t hold the NAP as a high priority, to be written, spread and downloaded in the blink of a robotic eye.

In conclusion, I should note this article isn’t anti-technology. Advancements are probably the only thing that’ll save our species in the long, long run (that marvelous Sun of ours isn’t going to last forever). New gadgets and gizmos are cool, and they show how magnificent our brains, our drives and our dreams can be. Still, some things have been created which humanity would probably be better off without: nuclear and chemical weapons, torture techniques, reality TV and so on. Artificial intelligence… well, the final verdict is, of course, yet to come. However, it’s likely those minds won’t embrace the non-aggression principle for the reasons mentioned, and as a result, they’ll likely be Statist. Do we really need more non-NAP-abiding, quickly calculating and cunning Statists in our world? No. Will concerns like Musk’s, Hawking’s or my own end up true? Only the future knows.


Anarchism and Rand

We libertarians, anarchists, dare I say radicals, continue to show the world the UNnecessary evils of the State.

Here’s an interesting article I came across today which coincides with several of my recent podcasts. It’s a letter from an anarcho-capitalist critiquing Ayn Rand, who was, in this regard, a minarchist. Good read! Here’s an excerpt, followed by the entire article from 1969 by Roy A. Childs, Jr.:

“And there is the major issue of the destructiveness of the state itself. No one can evade the fact that, historically, the state is a blood-thirsty monster, which has been responsible for more violence, bloodshed and hatred than any other institution known to man. Your approach to the matter is not yet radical, not yet fundamental: it is the existence of the state itself which must be challenged by the new radicals. It must be understood that the state is an unnecessary evil, that it regularly initiates force, and in fact attempts to gain what must rationally be called a monopoly of crime in a given territory. Hence, government is little more, and has never been more, than a gang of professional criminals. If, then, government has been the most tangible cause of most of man’s inhumanity to man, let us, as Morris Tannehill has said, “identify it for what it is instead of attempting to clean it up, thus helping the statists to keep it by preventing the idea that government is inherently evil from becoming known…. The ‘sacred cow’ regard for government (which most people have) must be broken! That instrument of sophisticated savagery has no redeeming qualities. The free market does; let’s redeem it by identifying its greatest enemy – the idea of government (and its ramifications).” 


A Libertarian on the Jury?

A month ago, I put in a request to take a week off, Monday 11/9 through Friday 11/13, from work. My time was approved, no sweat. Last weekend, on Halloween no less, I received a jury summons. At first I thought it was cool; I don’t mind being a juror. Then I saw the summons date… Monday 11/9. Son of a… The first day of my vacation. I planned that time to work on writing, wrap things up and continue on with the last act of my third book. Now, I know, I could very well be dismissed the night before, or I could get in there and only spend a few hours before being released. That would be fine. But, upon further thought, it would also be fine if I did get picked. So what, I’ll miss out on some time to write and sleep in. Being a juror is worth it. Who knows, I could very well be the only one to help keep a person’s life from being trifled with or ruined by the state. Good!

I don’t claim to be a professor of libertarianism. If you want to read and hear from the scholars, seek out people like Murray N. Rothbard, Ludwig von Mises, Lew Rockwell and Tom Woods. Even though I’ve only begun to grasp an understanding of libertarianism over the past five years or so, I’ve come to realize that I’ve been one my whole life. I’ve always followed the non-aggression principle, always disliked government and its perpetual use and threat of force to control my life, and always believed any one of us should be free to do whatever we want, say whatever we want, ingest whatever we want and so on, as long as no one is harmed or has their personal/property rights violated.

How does this tie into my thoughts on being a juror? Completely. Libertarianism is, at its core, a philosophy, a way of thinking and living. Its theories and practices cover both economic and social issues. It holds that personal accountability, in each and every one of us as adults, is essential for freedom and prosperity in the free market and humanity as a whole. There are no entitlements, and we, as masters of our own destinies, should never demand or perhaps even expect anyone’s help. The state is the opposite of this. The state flourishes as an outside entity that rules with the threat of violence, often in spite of how accountable the individual may be.

A man chooses to take a drug the state deems illegal, and he is punished even if, as is most often the case, no harm was done to anyone else and no property was damaged. A woman believes the fruits of her labors, often in the form of an income, are hers in their entirety. Oh boy… does the state ever disagree! Money, which I often think of as simply a numeric representation of energy, is what the state loves and worships the most because it, equal to fear and ignorance, is what feeds its power. So, the woman doesn’t want the government to take / rob / steal her money, money she invested her time and energy to earn. The IRS, i.e. the government, uses its force to damage, inconvenience, ruin or destroy her life. This is essentially the definition of a serf. Look up how the income tax came to be. It should provide some enlightenment, I hope. So, my libertarian philosophy is that where there is no victim (a victim being defined as someone who was attacked and harmed either physically or through damage to his or her property), there is no crime.

Now imagine I do get summoned and then put on a jury. Do you think I’d ever declare a drug user guilty? An income tax cheat? Most likely not. This is due to the fundamental essence of what a juror is: an officer of the court who serves as the most powerful and last line in the sand, determining not only the facts of whether the accused broke a law but also whether the law is even just or fair at all. Yes, jurors are supposed to take everything into account, state and federal laws be damned! No Victim, No Crime. I’ll stick with the non-aggression principle, where violence is justified in defense of one’s self and one’s property (home, land, family, children and so on). Dismissing laws the juror disagrees with and judging accordingly has been labeled jury nullification. Look it up for more information and examples. It gives we the people, those who serve as jurors, the true power in determining what is just and what is criminal.

There. Despite the potential mild inconvenience to my vacation, I look forward to serving as a juror. I’ll more than likely get weeded out by the prosecutor during questioning. After all, why would any officer of the state want a libertarian on the jury for a trial they’re working on? Who knows, though. Perhaps I’ll get through. Perhaps the accused will be someone, one of us, our fellow citizens, worthy of life, liberty and the pursuit of happiness, who’s being attacked by the state while causing no harm and simply being accountable to him or herself. Perhaps, given that situation, I’ll be able to play my part in helping someone stay free from interference, tampering or utter destruction by the state.