At one point in time (especially when I was a teenager), I truly believed that all knowledge was good, and that it made sense to simply discover it and spread it as far and wide as possible. I believed that tools were neutral, and that it was up to parents and society to teach and enforce their ethical use.

But that was many years ago. Now, I no longer believe this to be true. Such an ethical stance conveniently removes all responsibility from the person inventing or discovering the knowledge.

There is another flaw in the claim that knowledge is good, or at least neutral: it assumes that humans are free to act however they choose. That is also false. Even if a case can be made for free will, any set of decisions is highly constrained. I might be able to choose between working as a doctor or as an engineer, but if I am to survive, I can only work in the careers that are actually available. Hence, we can conclude that humans at all levels, both individual and societal, can only make so many choices.

What this means is that the use of knowledge and tools is not really up to individuals at all. Most societies are structured so that people must use certain tools whether they want to or not, and that is hardly a firm foundation for an ethical choice about whether to use them.

Let’s take an example: computers. Imagine a society without computers (in fact, many readers have probably lived in one). Now, if we introduce the computer, it is practically inevitable that people will start using it, because one person using it gains an enormous advantage over people who don’t. Through a prisoner’s-dilemma dynamic, everyone is soon forced to use it, regardless of the consequences.
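To see why, consider a toy payoff matrix for this adoption game. The numbers below are purely illustrative assumptions, not measurements, but they capture the dilemma: whatever the other party does, adopting the tool is the better move, even though everyone might prefer the world where nobody adopted it.

```python
# A toy payoff matrix for the adoption dilemma described above.
# All payoffs are illustrative assumptions, chosen only to show the
# structure of the game, not to measure anything real.
PAYOFFS = {
    # (my_choice, their_choice): my_payoff
    ("abstain", "abstain"): 3,  # the pre-computer status quo
    ("adopt",   "abstain"): 5,  # I gain an enormous advantage
    ("abstain", "adopt"):   0,  # I am left behind
    ("adopt",   "adopt"):   1,  # everyone adopts, the edge vanishes
}

def best_response(their_choice: str) -> str:
    """Return my payoff-maximising choice given the other player's move."""
    return max(("adopt", "abstain"),
               key=lambda mine: PAYOFFS[(mine, their_choice)])

for theirs in ("abstain", "adopt"):
    print(f"If they {theirs}, my best response is to {best_response(theirs)}.")
# Both lines print "adopt": adoption dominates, even though mutual
# abstention (payoff 3) beats mutual adoption (payoff 1).
```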

Therefore, I firmly believe that not all knowledge is good, and that it is up to the individual to decide whether to release their own discoveries to the public. Of course, society still bears the responsibility to decide collectively how dangerous certain knowledge and tools are, weapons being the obvious example, but we cannot ignore the responsibility of the individual who discovers the knowledge.

Such an ethical decision must go beyond merely considering laws and the typical, shallow ethical guidelines of academia. Rather, a person must consider societal impacts that may range hundreds of years into the future. Many people would give the glib reply that we can hardly plan ten years into the future, so what is the use of considering one hundred?

Well, we can, easily. For example, let’s say I discover a fountain-of-youth drug that extends all lifespans indefinitely, or even by 50 years. In that case, by releasing it, I would become extremely rich. However, I am certain I would not release it. Of course, I wouldn’t even begin such research in the first place, but if I did somehow stumble upon it, I would try to destroy all the research that could lead to the synthesis of such a drug.

Why would I do that? Simply because it would considerably worsen overpopulation. But not just that. It would also give absurdly rich millionaires an incentive to accumulate even more wealth and build even larger empires, because their urge for financial growth and power is insatiable, regardless of what happens to the planet. (Of course, I am speaking in averages. I am aware that there are also very generous millionaires who have done much to help the planet, but they are in the minority.)
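A back-of-envelope projection shows the scale of the overpopulation effect. The rates below are rough, assumed figures, and holding them constant for a century is a gross simplification, but the point is precisely that century-scale reasoning is possible:

```python
# Toy projection under loudly hypothetical rates: about 1.7% births and
# 0.75% deaths per year (roughly today's world figures), versus deaths
# near zero once the drug removes ageing. Constant rates for a century
# are an assumption made only to illustrate the order of magnitude.

def project(pop_billions: float, birth_rate: float, death_rate: float,
            years: int) -> float:
    """Compound annual growth: pop * (1 + births - deaths) ** years."""
    return pop_billions * (1 + birth_rate - death_rate) ** years

for label, death_rate in (("without the drug", 0.0075),
                          ("with the drug", 0.0005)):
    pop = project(8.0, 0.017, death_rate, 100)
    print(f"Population after 100 years {label}: {pop:.1f} billion")
# With these toy numbers, the drug roughly doubles the century-end
# population (about 21 versus 41 billion).
```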

Therefore, I can make a general statement: not all knowledge is good or even neutral. Some knowledge simply should not be known. Moreover, each individual has the ethical responsibility to decide whether their knowledge should be known.

Other types of knowledge might not be bad in themselves, but releasing them should come with an ethical warning, just as cigarettes come with a cancer warning. For example, computer science should be taught with an ethical warning about the dangers of AI systems like ChatGPT. And some computer science should not be researched at all. Mathematics should come with a similar warning.

To give a realistic example: as many of you know, I am a mathematician with a PhD in mathematics. I have done research in the past, and thankfully almost all of it is impractical, at least to my eye. I also don’t think much of it will ever become practical, unlike what happened with prime numbers, which went from pure curiosity to the backbone of modern cryptography. (I say this for several reasons, but two of them are (a) we are already reaching diminishing returns in the practical application of pure mathematics, and (b) my research wasn’t unusually groundbreaking in the first place.)
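To make the prime-number aside concrete, here is textbook RSA in a few lines of Python. The tiny primes and the message are purely illustrative (real keys use primes hundreds of digits long), but centuries of “useless” number theory now sit behind every secure connection:

```python
# Textbook RSA with toy values, for illustration only (requires
# Python 3.8+ for the modular inverse via pow). Never use such small
# primes in practice.
p, q = 61, 53                 # two small primes (insecure, illustrative)
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent: modular inverse of e

message = 42
ciphertext = pow(message, e, n)    # encrypt: m^e mod n
plaintext = pow(ciphertext, d, n)  # decrypt: c^d mod n
assert plaintext == message
print(f"encrypted {message} -> {ciphertext} -> decrypted {plaintext}")
```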

However, let’s say that during the course of my research, I discovered some practical algorithm that could be used in machine learning. In that case, I would certainly never publish it. It is true that I no longer have mathematical research as part of my job description, and that actually gives me a lot of ethical relief, because very few people share my concerns, and most would not be sympathetic towards them.

In closing, I would implore everyone who has read this far to seriously consider all of your inventions, whether computer-based or not. Ask yourself: could this have any negative impact on the planet? If so, consider not releasing it.
