Tue, 4th Jul 2023

Lately, the conversation surrounding the need to regulate Artificial Intelligence (AI) has reached a fever pitch. The question seems straightforward enough, but the potential implications are anything but. Let's take a deep dive into this multifaceted issue.

Various arguments for AI regulation include the weaponisation of disinformation through technologies like deep fakes, the invasion of privacy, the misuse of private information, copyright infringement by generative AI, and the potential creation of autonomous killing machines. On top of these concerns, the displacement of jobs and the threat of AI "going rogue" and destroying humanity are often mentioned.

In my upcoming novel, The Planet Fortht: Enslaving Freedom, I created a utopian world where AI and other advanced technologies safeguard individuals, uphold justice, and deter crime. Nobody is above the law, and justice is administered to anyone who causes harm to another human being. 

While the publishing process is slow, I feel compelled to speak up and share my thesis on the topic of AI regulation in this blog post.

I addressed the topic of job displacement by AI in my thesis, demonstrating how AI can usher us towards full employment. For this to become reality, I argue that AI needs the freedom to develop. I'll delve deeper into this as we go along.

The concept of AI "going rogue" has been a mainstay of science fiction and alarmist theories. For this scenario to materialise, an Artificial General Intelligence (AGI) engine would need to be developed and then unleashed without human oversight. This hypothetical engine would need to be either the only AI in existence at the time or so far superior that it could outmanoeuvre every other AI technology. The notion is not just apocalyptic; it seems far removed from our current reality.

The idea of an AGI engine being more fantasy than fact aligns with the views of philosopher Hubert Dreyfus. For an in-depth understanding, I recommend checking out Dreyfus's extensive work on artificial intelligence, which remains relevant despite advancements in AI.

But even if AGI were not a figment of technologists' romantic dreams, consider this: people who work on AI aim to create value, and unleashing uncontrollable AI onto the world serves no such purpose. Even in a hypothetical scenario where AI is weaponised, it's reasonable to assume the builders would put safeguards in place to protect friendly populations and their own military.

I've been working in the software development industry for over three decades, and I have never seen a software developer create software they can't control. Developers always build in a "back door" of some kind, if only for testing and development.
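
To make that concrete, here is a minimal sketch of the kind of developer-only hook I mean. It is entirely hypothetical; the names and the `MYAPP_DEBUG` variable are my own invention, not taken from any real product:

```python
import os

# Hypothetical sketch of a developer "back door": a debug-only control path,
# gated by an environment variable, that exists purely for testing.
DEBUG_MODE = os.environ.get("MYAPP_DEBUG") == "1"

_internal_state = {"requests_served": 0}

def process(command: str, payload: dict) -> dict:
    """Normal production path."""
    _internal_state["requests_served"] += 1
    return {"ok": True, "command": command, "payload": payload}

def handle_request(command: str, payload: dict) -> dict:
    # Debug hook: lets the developer inspect internal state on demand,
    # something ordinary users of the software never see.
    if DEBUG_MODE and command == "_dump_state":
        return {"state": dict(_internal_state)}
    return process(command, payload)

print(handle_request("greet", {"name": "world"}))
print(handle_request("_dump_state", {}))  # only revealing when MYAPP_DEBUG=1
```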

Software developers are like all human beings: imperfect, and we make mistakes. We create software with bugs, and we know it all too well. Costly examples include Knight Capital's 2012 trading glitch on Wall Street, which lost the firm roughly $440 million in under an hour, and the Y2K bug.

A foundational element of any software architecture is the ability to start and stop the program and to access the data that drives its behaviour. When we build software, we go through thousands of iterations of starting it, stopping it, and debugging the results. We are therefore unable to build software without the ability to control it.
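
As an illustrative sketch (my own minimal example, not any particular system's code), nearly every long-running program reduces to exactly this shape: a loop its operator can start, stop, and inspect at will:

```python
import threading
import time

# Minimal sketch of the control primitives all software rests on:
# start it, stop it, and inspect the data that drives its behaviour.
stop_flag = threading.Event()
state = {"iterations": 0}

def worker() -> None:
    while not stop_flag.is_set():   # runs only until the operator says stop
        state["iterations"] += 1    # behaviour-driving data stays inspectable
        time.sleep(0.05)

thread = threading.Thread(target=worker)
thread.start()                      # the developer starts it...
time.sleep(0.5)
stop_flag.set()                     # ...and the developer stops it
thread.join()
print(f"worker ran {state['iterations']} iterations before being stopped")
```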

It is therefore beyond any doubt: if you can't control it, you can't create it, because you can't test it or iterate on the mistakes you will inevitably make as a software developer. Creating software you can't control is impossible.

But critics would say we can't predict what AI will respond with, and therefore can't control it. What they are describing is not a lack of control; it is a lack of predictability in software output, and all software has an element of that. A basic example is the random number generator, which is specifically designed to give unpredictable output.

A more complex example is the weather forecasting model. Humans can't directly comprehend the vast amount of data such models process, which is why we rely on software to make predictions. We don't know what the prediction will be, but we are in control of how the software functions. Stock market analysis tools are another example: financial analysts control the algorithms, parameters, and data they feed in, yet the market itself remains inherently unpredictable. That is control over the process, not the outcomes, much as you can control an AI's parameters without always being able to predict its responses. In fact, every algorithm used to simulate a complex environment produces output that humans cannot predict in advance.
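
A toy demonstration of that distinction (my own example): the numbers below are unpredictable to any human, yet the developer controls the generating process completely, down to replaying the exact same "unpredictable" run on demand:

```python
import random

# Unpredictable output, fully controlled process: no one can say in advance
# which numbers will appear, yet the developer governs every detail of how
# they are produced, including pinning the seed to reproduce an exact run.
rng = random.Random()                 # unseeded: output unpredictable to humans
print([rng.randint(1, 100) for _ in range(5)])

rng_a = random.Random(42)             # seeded: the same stream can be replayed
run_a = [rng_a.randint(1, 100) for _ in range(5)]
rng_b = random.Random(42)
run_b = [rng_b.randint(1, 100) for _ in range(5)]
assert run_a == run_b                 # control over the process, not the outcome
```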

We've been using such software for years, and there is nothing wrong with creating software with unpredictable output. The point I am making is that we remain fully in control of such software because we control the underlying code.

Some say that if we can't control the output, we can't control the "behaviour". Using the term "behaviour" here is misleading. Applied to a human or an animal, "behaviour" includes the creature's own initiative, arising from instincts and feelings; it assumes the creature is in control of its own actions, body, and mind. That is what people often assume when the term is applied to a machine. But machines have neither instincts nor feelings, nor any initiative of their own. Their initiative is given to them by humans, and humans remain in control, as I demonstrated by explaining how software development is even possible.

If, based on my above arguments, AI (or AGI) going rogue is impossible, why are we debating the regulation of something that doesn't exist and will never exist? Instead, let's shift our focus towards tangible threats.

Disinformation, privacy invasion, copyright infringement, and autonomous killing machines are valid concerns. They are also very scary. But we need to see those tools for what they are: weapons that are built to harm others. 

Those employing AI as a weapon to harm others should be held accountable, and a legal framework to deter such misuse and ensure justice is paramount. Whether the offender is a street thug or a corporation, punishment must follow: those who harm other people must answer to society. As with any weapon, the invasion of privacy and the misuse of people's private information can cause harm. Ditto for copyright infringement: any personal or company property that has been stolen must be returned, and the thief brought to justice. This is the one place where lawmakers' work truly matters.

However, we also need to consider those beyond the reach of the law, such as rogue states or the international criminal underworld. The only way to combat those is to equip both governments and regular people with the same technology so they can protect themselves.

This brings us to a crucial point - much of the AI technology currently in use is based on open-source software. It's free to use and modify, making it a hotbed for innovation and collaboration. 

This also means that, beyond the large data models, the underlying technology is not currently owned by anyone, and it cannot be owned by anyone unless it is regulated. This, in turn, is the very thing that gives us a safety net, indeed the only thing that does. Right now, we can protect ourselves from both the criminal underworld and the rogue state.

A leaked internal Google document has already admitted that open-source AI will outcompete both Google and OpenAI.

This open nature will change if any regulations are enforced. This is the perfect way to disarm all of us in the face of real threats.

Corporations want to regulate AI to protect their monopoly. Regulation will push AI out of the hands of the open-source community and give monopoly power to large corporations and rogue states.

Imagine living in a world where the only people controlling AI are people like Elon Musk, Vladimir Putin, and Kim Jong Un. Now imagine those are the people who can use AI to weaponise disinformation and create autonomous killing machines. Actually, you don't have to imagine: they already have the basics and the foundations to do this. Regulation will thwart AI innovation in the Western world and hand the advantage to our adversaries. It will also equip monopoly holders to extract super-profits at ordinary people's expense.

For these reasons, regulation such as the licensing or patenting of AI would have a disastrous effect on us. It would disarm us, regular people, stripping away our ability both to earn a living and to protect ourselves.

Regulatory restrictions on AI development seem ill-advised. What we do need is legislation ensuring that a human is always liable for an AI's actions. Accountability must lie with a human: whether harm comes through negligence, a rogue actor, a corporate entity, or even a state, someone must be held responsible.

What we should vehemently reject is the introduction of "licensing", "patenting", or any other compliance regulation that increases the barriers to entry for entrepreneurs in the AI market, consequently leading to monopolies over this technology.

While regulation seems like a quick fix, it will lead to an unwanted concentration of power and stifle the democratisation of AI.

By harnessing the power of accessible, open-source AI technology, we can pave the way for a safer and more prosperous society, as depicted in the utopian world of "The Planet Fortht".