There are many reasons why AI (artificial intelligence) is bad for us. But the main reason is that AI works and spreads through society via the mechanism of the “prisoner’s dilemma”. This is the situation in game theory where two people can each choose between two options: in our case, using AI or not using AI.

Both would be fine if neither used AI. However, if the first person uses AI, then he gains an advantage over the other. So the other person uses AI as well. But if both people use AI, then both end up worse off than before.
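The dynamic just described can be sketched as a tiny two-player game. The payoff numbers below are illustrative assumptions, chosen only so that mutual abstention beats mutual adoption while adopting AI is always the better individual move:

```python
# Illustrative (assumed) payoffs for the AI-adoption dilemma:
# mutual abstention beats mutual adoption, yet adopting always
# beats abstaining from the individual's point of view.
payoffs = {
    # (my_choice, their_choice): my_payoff
    ("abstain", "abstain"): 3,  # both fine without AI
    ("use_ai", "abstain"): 4,   # I gain an edge over the other
    ("abstain", "use_ai"): 1,   # I fall behind
    ("use_ai", "use_ai"): 2,    # both worse off than mutual abstention
}

def best_response(their_choice):
    """Return the choice that maximizes my payoff given the other's choice."""
    return max(("abstain", "use_ai"),
               key=lambda mine: payoffs[(mine, their_choice)])

# Whatever the other person does, using AI is the dominant strategy...
assert best_response("abstain") == "use_ai"
assert best_response("use_ai") == "use_ai"

# ...yet both players end up worse off than if neither had adopted it.
assert payoffs[("use_ai", "use_ai")] < payoffs[("abstain", "abstain")]
```

Any payoffs with the same ordering produce the same trap: the individually rational choice leads to the collectively worse outcome.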

This effect means that, in the end, everyone will need to use AI to make money. The arts should be about the expression of the human spirit, but AI will replace most human work with artificial work.

What does this mean? First, it will become difficult for humans to make art. There will be so much AI-created art that people will have no way to compete. Even if human works are better than those created by AI, human works will simply be lost in the sea of the internet. AI creates things very quickly.

Second, since AI will produce most things, people will lose their sense of purpose in life. Of course, we can make things for fun, but most people want to contribute something to the world, and they will not be able to contribute anything if AI does everything.

Some people have said that AI may never reach the same level as humans. That is possible, but what AI can do right now is already very bad, and it is already helping the technological system steal our humanity.

Is there anything we can do to fix this problem? Yes. It is difficult, of course, but there are some basic steps we can start with:

  1. Treat technology with care. We should not create new technologies without knowing what effects they will have.
  2. Form coalitions that do not use AI.
  3. Reduce the economic incentives to develop new technologies, perhaps by imposing high taxes on AI companies.

Ivan Illich argues in his book ‘Energy and Equity’ that the ability to travel beyond a certain speed has caused inequality and ruin for humanity. Moreover, he proposes that the world would be better off with a speed limit closer to that of a bicycle. Cars, the primary tools we use to travel faster, have forced upon us the necessity of spending more time traveling for necessities, while reducing the mobility of those on foot.

I entirely agree with Illich. Of course, the world being as it is, simply implementing a speed limit will not solve the problems brought on by the transformation of the world that our past capacity for quick transportation has wrought, although I still believe such a limit might eventually cause restructuring and improvement.

What about a speed limit for technological development? We need such a limit because technological development is now too fast to be good for us. It moves at a speed that most people cannot handle. If thousands of jobs are replaced tomorrow in the time it takes for some code to compile, how will humanity cope?

Slower development would help people transition to more advanced societies, and it might also make people think twice about developing technology in the first place. For example, imagine if AI companies were required to wait three years before releasing their creations. Perhaps we could prepare a little better for the onslaught of AI. Of course, in the case of AI, I recommend halting progress altogether, because I doubt we will ever have the maturity to handle it.

But at any rate, how can we implement speed limits for technological growth? One possibility is to require everyone to submit new technological inventions of a certain kind to an evaluation and ethics board. Although it might be difficult to specify which technologies need to be submitted, as a rule, large technology companies like Google and Microsoft would be subject to intense scrutiny of all their creations.

I don’t know of too many practical solutions to this problem, unfortunately. However, as a society, we should think very carefully about this problem. Extremely fast technological development could cause serious accidents, just like driving too fast causes car crashes. If we limit the speed of technological development, we would have a better chance of avoiding such accidents.

In 2009, George Monbiot wrote an article called “The Population Myth”. In it, he attempts to debunk the idea that overpopulation is a serious problem when it comes to climate change and pollution, and instead blames the consumption of the world’s rich. Does his article have any merit? Let’s take a look.

Monbiot’s article cites many facts such as the following:

Between 1980 and 2005, for example, sub-Saharan Africa produced 18.5 percent of the world’s population growth and just 2.4 percent of the growth in CO2. North America turned out four percent of the extra people, but fourteen percent of the extra emissions.

It is undeniable that Monbiot’s facts are true. Rich people produce more pollution than poor people. For example, in 2018 the average American produced 16.1 metric tons of CO2, whereas the average Indian produced 1.9 (source). He ends his article with:

So where are the movements protesting about the stinking rich destroying our living systems? Where is the direct action against super-yachts and private jets? Where’s class war when you need it? It’s time we had the guts to name the problem. It’s not sex; it’s money. It’s not the poor; it’s the rich.

I actually agree with Monbiot that the rich are very much helping to destroy the ecosystem. But I vehemently disagree with him when he concludes that overpopulation is not a problem because the rich are so few and the poor are not to blame. So, although his facts are true, his conclusion is hopelessly false, for three main reasons.

The first reason is that CO2 output is not the only environmental problem. Another major one is habitat loss, much of which is due to people simply taking up space, and in that regard more people means more habitat loss, period. Yes, some habitat loss is driven by richer countries buying exports, but much of it comes from people simply needing somewhere to live. Africa may not release as much CO2 per capita, but it still has roads, habitat fragmentation, and a growing economy.

The second reason is that technology is advancing so rapidly that most poor people will become much greater consumers in the near future. Poverty rates are decreasing, and wealth is increasing all over the world.

The third reason is that so much wealth exists in the first place because there are so many people. Poor people may not expel much CO2 directly, but en masse they certainly enrich wealthy nations, which in turn use all those resources to further contaminate the planet.

So, while I agree with Monbiot’s assessment that richer people and countries expel much more CO2 directly, I disagree that overpopulation is not a problem. If the world had only ten million people in it, for example, there is simply no way people could generate as much wealth disparity, and hence pollute as much as they do. Habitat loss would be far less, and economic growth would be much less rapid.
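Some back-of-the-envelope arithmetic makes the scale argument concrete. It reuses the per-capita figures quoted earlier (16.1 metric tons for the average American, 1.9 for the average Indian, in 2018); the rounded population figures are my own assumptions, included only for illustration:

```python
# Per-capita CO2 figures quoted above (2018, metric tons per person);
# the population numbers are rounded illustrative assumptions.
us_per_capita, india_per_capita = 16.1, 1.9
us_pop, india_pop = 327e6, 1_350e6      # approx. 2018 populations

us_total = us_per_capita * us_pop           # total US emissions
india_total = india_per_capita * india_pop  # total Indian emissions

# An ~8.5x per-capita gap shrinks to only ~2x in total emissions,
# because sheer population size does the rest of the work:
per_capita_gap = us_per_capita / india_per_capita
total_gap = us_total / india_total

# And if per-capita consumption in the populous country ever approached
# rich-world levels, total emissions would scale with population:
india_if_rich = us_per_capita * india_pop
print(per_capita_gap, total_gap, india_if_rich / us_total)
```

Under these assumptions, population scale turns a modest per-capita footprint into a large absolute one, which is the sense in which the number of people still matters.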

Fewer people, and more precisely less population growth, would mean finding a more stable society that does not depend on endless growth and the invention of mostly useless technologies.

Of course, I do agree with George Monbiot that the richest countries should reduce their per capita CO2 output. But it is also important to focus on getting education to poorer countries so that they can have fewer children and grow less.

Carbon capture refers to a series of technologies that are designed to remove carbon dioxide from the environment. Recently, the Biden administration allocated a billion dollars towards this goal. Is this a good thing?

Of course, CO2 levels in the atmosphere are rising like crazy. Right now the concentration is just over 417 ppm. This will certainly have disastrous effects, so it’s a good thing to remove CO2 from the atmosphere, right?

The answer is yes, absolutely yes. If we can remove CO2 from the atmosphere without causing other damage, then that would be amazing.

But that does not mean we will be home-free. Therefore, it pays to look at carbon capture technology more critically. Indeed, if we do manage to develop carbon capture that works, then in the short-term, people will be much more encouraged to use fossil fuels. In turn, this may lessen the pressure to develop alternative energy sources.

This isn’t guaranteed, but we need to keep this possibility in mind. If the external costs of using fossil fuels go down, people will use more of them, and thus pollution levels may actually rise. Carbon capture also makes our use of resources more efficient, and hence we’ll use more of them (the Jevons paradox).

So while I agree carbon capture is good for us in the short-term in that it solves some of our immediate problems, don’t think that humanity will all of a sudden transform into some utopia when it works. We will just be even more dependent on technology and if it breaks, it will break very hard.

Previously, I talked a little about technological determinism. Briefly, technological determinism is the thesis that technology is propelled along by its own strong force, and although we can influence its development somewhat, it takes a lot of effort to resist its march forwards.

Determinism is a minority view. Anyone who states that technology is a tool and that we can choose how to use those tools is probably not a believer in determinism. Yet there are strong reasons to believe in determinism, because it has clear and observable mechanisms, such as the prisoner’s dilemma and the fact that we have predictable instincts. These instincts were often adaptive in our past, having been shaped by evolution, but have become maladaptive now that most of the resources we require to live exist in excess.

It is interesting to read some rebuttals towards determinism, and I recently read of one by John Michael Greer in his book “The Retro Future”. It goes without saying that I agree with much of what Greer has to say, especially when it comes to how we should proceed as a culture in the future, and how we should handle technology. However, Greer does speak about determinism:

When talking heads these days babble about technology in the singular, as a uniform, monolithic thing that progresses according to some relentless internal logic of its own, they’re spouting balderdash. In the real world, there’s no such monolith, no technology in the singular. Instead, there are technologies in the plural, clustered more or less loosely in technological suites that may or may not have any direct relation to one another.

This goes against the idea of technological determinism. Greer states that some technological suites might have developed, such as the technology required to build a bicycle, without others, such as the technology needed to build a radio receiver. He then says:

Strictly speaking, therefore, nothing requires all the different technological suites to move in lockstep. It would have been quite possible for different technological suites to have appeared in a different order than they did; it would have been just as possible for some of the suites central to our technologies today to have never gotten off the ground, while other technologies we never tried emerged instead.

Of course, Greer is absolutely right that technology could have come about in multiple ways, but that in no way makes it inconsistent to think of technology as a “monolithic thing that progresses according to some relentless internal logic”. In exactly the same way, different animals have evolved on this earth in different forms, yet each is a single entity with its own internal logic.

The “internal logic” of technological determinism doesn’t mean that technology must evolve in a lockstep fashion in exactly one predetermined way. Rather, it means that there are forces such as the prisoner’s dilemma and human instincts (and more generally, many self-organizational principles) that act to push technology further. Technology can still act as a whole, even though different pieces may shift around causing different eventual outcomes.

I think ultimately, Greer and I end up in the same place. Later in the chapter, he says

[W]hen something is being lauded as the next great step forward in the glorious march of progress leading humanity to a better world someday, those who haven’t drunk themselves tipsy on industrial civilization’s folk mythology need to keep three things in mind.

I think at the beginning, when Greer is putting down the idea of technology as a monolith, he is really referring to the idea that progress is inevitable and great, which may be a conclusion of some who believe in determinism. In some ways, the technological organism that I talk about could be seen by some as the next phase of life and existence. In fact, many do see it that way. Some technophiles even dream of being put into virtual reality, or of having their bodies augmented or even replaced by machine entities that don’t get sick.

However, one must not conflate technological determinism with the idea that technology is the next “great step” in the evolution of life, and that it will create a better world for all. Technological determinism is a manifestation of a self-organizing force just like that which created life, but it is in fact more like the evolutionary reflection of evil, to make a crude analogy.

So, I understand where Greer is coming from. Technology is certainly not a great thing, and neither is progress. In fact, it is frequently detrimental, and we do have influence over it. Yet to dismiss technological determinism is also a mistake, because it is a useful description of how technology actually comes to be. One must not fall into a trap similar to the error of the social constructivists and believe that we can recreate any sort of society at will by modifying society to modify the individual, or in this case, by modifying society to easily control technology.

In short, while Greer’s book contains excellent ideas on how to proceed with technology and while I agree with almost everything in it, I do believe that a dismissal of the internal logic of technological growth is also a weakness that makes it harder to comprehend the true nature of technology and how to handle it.

Technological determinism is a position in the metaphysics of technology that states that technology develops as a force in a particular direction which is outside human control, and this development determines and shapes human values.

Determinism doesn’t say that technology is unstoppable, but rather that it forcefully pushes itself in the direction of greater advancement, mostly independently of human control. So although we can influence the development of technology somewhat, it is nevertheless a strong force with great momentum that is hard to redirect.

The position of technological determinism is a minority view, whereas the majority of people take the instrumentalist view: that technology is merely a tool. However, there is one major point for determinism that cannot be overlooked: the prisoner’s dilemma, as I have talked about at length.

In short, this says that any new technology will inevitably be developed and used, since it gives individuals advantages over others in a wide variety of situations; thus basic human instincts, together with our social structure, form a growth medium for advancing technology.

Instrumentalism, or the view that technology is just a set of tools freely created by humans, is intuitive because without human action, technology cannot develop. However, determinism does not contradict the fact that humans create technology through their own action.

Instead, determinism merely says that the large-scale effects of humans interacting together in a social system with all their instincts create a system such that the advancement of technology is inevitable, and thus technology proceeds inexorably due to the emergent phenomenon of people doing things for their own benefit.

Can determinism be overridden?

The main consequence of technological determinism is that it grows without end and is almost impossible to stop. Of course, it is not truly impossible to stop. The collapse of civilizations often slows it down or can even stop its progress. And in small groups, technology may not grow at all.

Technology is like a virus with no cure growing in your body. Can you stop it? It might be possible, but it is unlikely. A virus is not alive, but it operates with a rule that makes your cells copy it. Technology operates like that: it provides a set of strong incentives to copy it and improve it, and we respond instinctually to those incentives. Of course, any individual can respond by not copying the virus of technology, but because the incentives provided by technology are so strong, most people really cannot do anything but copy it, me included.

The cure for technology is for an overwhelming majority to choose not to copy the virus of technology. In other words, technological determinism is not absolute, but the cure is very difficult to find, probably much more difficult than finding a cure for some human viruses.

Technology is insidious in that people usually see only the local improvements of technology. By a local improvement, I mean one that solves an immediate problem without making its full effects known. Take smartphones: they solve local problems such as notifying people immediately if something goes wrong, and they relieve some of the stress caused by the possibility of something going wrong in the first place.

Of course, the longer-term effect of smartphones is that they make the world much busier and cause other sorts of problems, like smartphone addiction and car accidents caused by drivers looking at their phones.

Some technologies may be truly benign, or at least benign enough that even the most enlightened societies might choose to keep them. Other technologies might have a mix of good and bad effects, and we might choose to keep them simply because the good outweighs the bad.

My ultimate ideal for society would be that we examine each technology thoroughly and decide whether to use it only after an intensely critical look at it. In this post, I will describe eight challenges that will accompany the path to such a society.

1. Some people’s lives improve a lot with technology

Some people’s lives are much better with technology, and other people might just really enjoy using technology. Of course, I also enjoy using technology like my camera.

People enjoy using technology even when the overall effect of the technology on society is negative, simply because the negative effect does not follow from the individual’s use of said technology. Technology is interesting because it is addictive. It plays upon our instinct to gather as much information as possible, and it panders to our social instincts to connect, even though the resulting connection enabled by technology is often inferior to real connection.

This effect alone will make it hard for some people to give up technology, and it will make it hard for people to want a society where that technology is not as advanced. Many people simply love using their phones, and would find them hard to give up. When AI is more thoroughly integrated with phones, the addiction will only become worse.

Moreover, the people whose lives are especially comfortable are made so by technology. Those people are in turn the people who will have the least to lose if our society collapses due to environmental degradation, which in turn is caused by technology. And finally, those people are the ones who often have the most power in our society, not only because they are often the people with the most money, but also because they are the most highly educated and are the ones who are at the forefront of technological development.

The biggest problem in societal reform will be to convince these people that endless progress is bad. Their retirement accounts are often dependent on endless economic growth, which is primarily a function of technology, or if you will, a reflection of the growth of the technological organism.

2. Negative effects are long-term

Another challenge with technology is that its negative effects are not immediately visible. Humans just were not programmed to take into account consequences such as the complete ecological collapse of the planet. We evolved at a time when humans could not destroy the global stability of our ecosystem, so such a collapse was never an evolutionary pressure.

Humans, like all creatures, evolved to maximize fitness, which essentially means being able to prolong their lineage. In simpler but cruder terms, humans made decisions so that they could have children who themselves could have children. Therefore, our instincts, which drive our logical reasoning, are powerfully biased to focus on gains toward this goal alone.

Of course, we have the capacity for abstract thought, and thus the ability to see the error in our ways, at least if we’re trying to maximize sustainability, but this realization will always be shadowed by our short-term instincts. Thus, if we are to have any hope of reaching sustainability and harmony with other plants and animals, we need to be much more aware of this feature of ours, which is a shortcoming in the context of our overpopulated world and unsustainable resource usage.

In short, the rapid development of new technologies fits our short-term brains perfectly, but is antithetical to our long-term survival.

3. Prisoner’s dilemma

The prisoner’s dilemma is a pervasive phenomenon in modern culture. It describes a situation where two people would be better off if neither of them took a certain action, but because taking the action benefits either person individually no matter what the other does, both end up taking it. It’s the game-theoretic description of the classic arms race, and due to our capitalistic society, the prisoner’s dilemma is everywhere.

In the realm of technology, the prisoner’s dilemma occurs because one person using a new technology brings a benefit to that person alone, and so other people use the technology merely to keep up, even if in their hearts they believe that using the technology is not right and goes against their values, or, more prosaically, even if that technology isn’t really crucial for them to make a decent living. I once tried to convince the users of Mathoverflow to reject AI and leave the Stackexchange network. The response I got from Joseph van Name was a classic example of the prisoner’s dilemma:

A stand against AI development will simply allow some groups to progress unimpeded while other groups will be stifled, and this effort will result in inequality and less AI safety. If one wants to slow down AI progress, then one should have worked on AI safety earlier, and one now needs to work harder on AI safety. While we may not be able to stop or slow down the development of AI, we can certainly steer the development of AI into a safer or more friendly direction.

Of course, I reject everything said here and I do not believe anyone can steer AI in a more friendly direction with such an attitude.

The prisoner’s dilemma is one of the prime drivers of all technological development. The grass is greener on the newer side, so everyone flocks to the latest and the greatest. Greatest indeed.

Unfortunately, the strategy that is rational for the one-shot prisoner’s dilemma in the game-theoretic sense is completely devastating for humanity in the long term. If we are to take a critical, societal-level look at technology, we will need a way to combat the prisoner’s dilemma: introducing new rewards that disrupt the basic game-theoretic tragedy, so that the formerly rational strategy is no longer rational, even in the short term.
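As a sketch of what disrupting those incentives could mean, consider a toy model in which adopting a technology yields a private edge, but an externally imposed penalty (a tax on adoption, say) is subtracted from the adopter’s payoff. The payoff numbers and the tax level are illustrative assumptions, not data:

```python
# Toy model: a penalty on adoption can flip the dominant strategy.
# All payoff values are illustrative assumptions.
def best_choice(their_choice, tax=0.0):
    """My best response given the other player's choice and a tax on adopting."""
    payoffs = {
        ("abstain", "abstain"): 3,
        ("adopt",   "abstain"): 4 - tax,
        ("abstain", "adopt"):   1,
        ("adopt",   "adopt"):   2 - tax,
    }
    return max(("abstain", "adopt"),
               key=lambda mine: payoffs[(mine, their_choice)])

# Without intervention, adopting dominates regardless of the other player:
assert best_choice("abstain") == "adopt"
assert best_choice("adopt") == "adopt"

# With a large enough tax, abstaining becomes the rational strategy:
assert best_choice("abstain", tax=2.0) == "abstain"
assert best_choice("adopt", tax=2.0) == "abstain"
```

The point of the sketch is only that the dilemma is not a law of nature: change the payoffs and the trap disappears.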

4. Tech solves problems created by previous tech

Technology introduces all sorts of side-effects, some of them very dangerous. For example, pollution caused by the technology of fossil fuels increases lung diseases such as asthma and cancer. And sometimes, the only way to solve these problems in the short-term is to use even more technology. In the case of lung diseases, it’s medical technology.

Another example is the loneliness caused by having so much technology that gets in the way of genuine human interaction. In this case, the remedial technology is the technique of therapy and antidepressants. There may be better ways to solve this and other problems brought about by technology, but creating new technology is often the path of least resistance, which is very attractive to us in our fast-paced world.

Thus, we need people to understand the problems created by technology and to find even more creative solutions to these problems that do not make use of new technology.

5. Capitalism

In theory, I have nothing against capitalism on a small scale, especially when it is sustainable. Sustainable capitalism means an emphasis on local trade, local production, and involvement throughout a local community. It involves a strong ethic of everyone having enough through the trading of their comparative advantages, without the ruthless and soulless drive for endless efficiency that is characteristic of globalization mediated by advanced technology.

On a large scale, capitalism has essentially failed. It has succeeded in making many people comfortable, but it has also produced a large machine of endless growth on a finite planet, and a force of humanity powered by unsustainable technologies like fossil fuels.

In the future, one of the key ways of living more sustainably would be to slowly bring about political and economic reform so that the rapid development of technologies for short-term gain is no longer profitable. Capitalism works well in a society whose size is dwarfed by the available resources and the surrounding habitat, and when the society is relatively homogeneous. In the sense of sustainable ecology, capitalism starts to fail in our case because the reverse is true: the size and complexity of our society dwarf the natural ecology.

6. Anti-technology Is Differentially Attractive

On the surface, anti-technology movements, or other movements that promote caution toward technology, are much more appealing to those who especially love nature than to those who have little exposure to nature or who have never had the chance to experience its beauty.

Thus, although anti-technology views are crucial to oppose the technological organism, anti-technology individuals and groups risk being insular and they risk being limited to existing as a place for those who detest the onslaught of advanced technology.

Even if a fairly large group emerges made up of people who want to be more cautious with technology, they risk merely being a secular version of the Amish, who live relatively peacefully amongst themselves but who do not effect change. Although this would make members of the anti-technology group happier in some ways, it would not solve the underlying problem: preventing the mass of humanity from crushing other lifeforms with the aid of advanced technology and resource usage.

Thus, it is imperative for those who wish to change the world not merely to be content with living away from technology, but also to continually strive to stifle the crushing nature of modern humanity on our biosphere.

7. Livelihoods are Dependent on Technology

Most humans now depend on technology for food and shelter. They don’t just depend on it indirectly, in the sense that advanced technology is used in farming and house construction. They also depend on it directly, because a great mass of jobs are in fact involved with the creation of new technology in some way.

For example, the role of marketers is to promote new technology—most of which we don’t need. The role of programmers is to make computer hardware more functional for the technological organism. The role of bus drivers is to drive people to their workplaces so they can create new technology…and so forth.

Like it or not, any job you can get will likely strengthen the technological organism in some way, no matter how much of a Luddite you are (and I use Luddite in the positive sense).

Moreover, our society is set up so that as you get old, you are taken care of through mechanisms that are sustained mainly through technological growth. For example, some old people depend on their retirement funds and those funds are partially increased through investments into technology companies.

8. Inertia

All the previous problems are given great momentum because of inertia. Due to the massive world population, making changes is very hard. Some people cite the world reaction to COVID to prove that we can move fast when necessary. However, the reaction to COVID was within the interests of the technological system. Of course, when the existing system is threatened, it can react quickly.

The same type of reaction can be seen with the Russia-Ukraine war. Soon after Russia’s invasion of Ukraine, many of the major world economies put an economic stranglehold on Russia and sent an enormous number of weapons to Ukraine.

Does this mean that the massive human populace can work together to solve problems? Absolutely not. It proves that when there is a direct threat to technological and economic growth, the system can react quickly to it. But the dangers to humanity caused by the technological system would require opposing that very momentum, which is far more difficult than protecting the system.

Additional Problems

Since the writing of this post, I have come up with some additional problems:

  1. People who love nature may detest technology, but in turn this may make their crusade towards a more enlightened future look more like an outright crusade against technology (which it could be as well). However, people who are aware of technology’s extreme dangers need to work together to find an optimum path towards an enlightened society, which may require subtle thinking with less emotion.
  2. The education of the latest generation in higher educational institutions is overwhelmingly politicized, mostly towards the radical left. Most of them are taught to see the world in very black and white terms. Part of the specific nature of the radical left is to seek comfort in the existing technological system by promoting equity across pre-defined demographic groups in technological development.

Conclusion

In short, the problems facing our technologically strangled society are huge, and solving them requires collaboration and talent in all areas. In order to be ecologically sustainable, we need to solve all of these problems so as to move peacefully from our present state into one that does not pose a serious existential threat to every lifeform, including ourselves.

These problems sometimes seem insurmountable, which is one reason we see a variety of rebellious acts against society, such as anarchism and revolutionary movements. However, small-scale revolutions and anarchism aren’t enough to make much change on their own, and where they are, they tend also to be chaotic. Of course, anarchism and similar schools of thought may offer useful insights.

The most important thing to do if you believe in reform or revolution is to contribute in your capacity to promote skepticism towards technology, growth, and our devastation of the ecosystem for short-term gain.

But to solve the problem of our technological society posing a crushing risk to the planet, we need to go beyond single ideologies or strategies. We need to evolve organically with the problem at hand and carefully instill these ideas into the mass of our population, so that its inertia shifts away from endless growth and towards sustainability.

I welcome any comments!

A recent news article in the journal Nature reported the following:

A prominent journal has decided to retract a paper by […] a physicist at the University of Rochester in New York who has made controversial claims about discovering room-temperature superconductors — materials that would not require any cooling to conduct electricity with zero resistance.

I am not going to comment on the technical nature of this paper or whether it is fabricated. I am wholly unqualified to do so. Instead, I want to talk a bit about data fabrication. What is the perspective here if we use a critical eye towards technology?

I am not at all surprised that there have been some recent high-profile controversies over faked data. As a former academic, I can tell you that there is immense pressure to publish, and that academia, like everything else, is becoming gamified through metrics such as the h-index, the way grants are funded, and the way promotions are awarded.
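For readers unfamiliar with the h-index mentioned above: a researcher has an h-index of h if h of their papers have each been cited at least h times. A minimal sketch of the computation (the function name and sample citation counts are my own illustration, not taken from any real database):

```python
def h_index(citations):
    """Return the largest h such that the author has h papers
    with at least h citations each."""
    h = 0
    # Rank papers from most-cited to least-cited.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers each have >= rank citations
        else:
            break
    return h

# Five papers with these citation counts give an h-index of 4:
# the top 4 papers each have at least 4 citations.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

A single number like this is easy to rank, compare, and optimize for, which is precisely what makes it such an effective instrument of gamification.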

Of course, I don’t condone faking data, and I don’t believe it is right under any circumstances. But I don’t have much sympathy for the academic community either, since it has created a system geared mainly towards churning out as much technology as possible. In fact, that is one of the main reasons cheating exists: the system of academia is tuned to push human beings unnaturally into behaving like machines. We are not designed to be pushed in such a manner, so naturally some people will behave unscrupulously.

That doesn’t mean the person faking data isn’t responsible (again, I am not commenting on this specific case, which is still being investigated). Every person is responsible for their own morality, and if a person behaves unscrupulously, that is their decision; it is certainly not merely an effect of a system, even the technological one.

What I am saying is that the technological system encourages unscrupulousness simply by nudging people (through its design, which we have created) into being more machine-like.

This perspective gives us insight into the right way to handle the problem of fake papers. In fact, the problem isn’t just fake papers, but the way academia is structured. This anti-technological analysis suggests the following strategies for reshaping scientific pursuit:

  1. Science should not be about creating as much profitable knowledge as possible. If science is geared towards providing the kind of technologies that are intended for short-term gain at the cost of long-term problems for humanity, then it will suffer because it is accumulating negative karma.
  2. Science should instead be about the genuine and respectful pursuit of knowledge, intended only for living harmoniously on this planet with plants and animals.
  3. Thus, we should restructure science by decoupling it from the capitalistic notion of endless publication for eventual profit and personal fame.

On the other hand, if we simply implement more measures to detect cheaters, such as more stringent ethical scrutiny, machine detection of fake data, and ensuring experiments are more repeatable, then we will not have made science much better. Sure, we might have a few fewer fake papers, but we will also have an even more tightly controlled system that pushes us faster and further down the road to becoming automatons.

My anti-technology newsletter is steadily gaining subscribers. Don’t forget to subscribe! The newsletter is another way I engage with people who are skeptical of technology, and in it I focus more on practical tips for living in a world saturated by technology.

Please check it out! It is not for profit: I don’t put ads in it and I don’t charge money for it. There is literally no financial benefit for me when you subscribe. My only goal is to help create a world where we are more cautious about new technology and can still live as human beings!

I wrote the following letter to McGill’s Computer Science department:

Dear [XYZ],

My name is Jason Polak and I graduated from McGill University with a PhD from the Department of Mathematics and Statistics in 2016.

I would like to voice a concern and make a recommendation. I am concerned about the recent developments in AI, which will likely do the following to society:

(1) Remove the feeling of purpose from most people’s lives, thereby making most people depressed.

(2) Remove the need that people have for each other since AI will be able to do most tasks.

(3) Concentrate wealth further into the hands of those running Silicon Valley.

AI will also almost certainly have other consequences that we are unaware of. A more detailed analysis can be found on my blog at (https://blog.jpolak.org/six-reasons-why-ai-will-make-the-world-worse/), among other posts there. You may also consult texts such as David Skrbina’s “The Metaphysics of Technology”.

I propose that your department conduct an ethics investigation into the dangers of AI, and if you find any such dangers, I recommend that your department cease supporting AI research.

I know this seems unusual, but I truly believe the disruptions to society caused by AI will be an order of magnitude greater than those of any other technological development. If you don’t believe me or find my claims outlandish, please at least read them carefully, along with some of the texts written by eminent philosophers on this matter.

Sincerely,
Dr. Jason Polak

If you have concerns about AI and its effects on society, I suggest you email as many computer science departments and companies as you can!