One may think that questions concerning the ontology of technology and the metaphysical relationship between humans and technology have become pressing only now, since it was only in the last 10-20 years that Artificial Intelligence proved that it can in practice compete with and replace humans in numerous activities. But this discussion actually started a long time ago, being particularly marked by the ground-breaking writings of Marshall McLuhan and Raymond Williams. In The Gutenberg Galaxy: The Making of Typographic Man (1962) and Understanding Media: The Extensions of Man (1964), and in Television: Technology and Cultural Form (1974), respectively, the two thinkers developed two main opposed visions regarding these questions. McLuhan is considered the father of technological determinism, a perspective arguing that, by the very fact that technology mediates between man and reality in a certain way, it affects his perception and bodily experience, becoming “an extension” of him. At the same time, given that this inevitable mediation depends to such a great extent on the design of each technology, the content of the reality that it conveys to the human being becomes rather unimportant – if one truly becomes aware of the primacy of the mediation process, one realizes that “the medium is the message”. To this, Raymond Williams replies that technology in itself does not carry any essential message: its functions, development and aims can only be analyzed within the historical, cultural, social and economic context that determines them. Changes in the usage of a technology are strongly linked to class interests, political and economic necessities, and social trends, but there is nothing predetermined about them. In general, technology reflects already existing patterns of consumption and power relations in a society.
Although there is more to say about both positions, their presuppositions may seem unsophisticated to the theorist of technology today. Both thinkers are arguably radical in their attempt to reduce an entire phenomenon to a single explanation. Nevertheless, there is something substantial about their intuitions and insights, proven over time by the fact that they opened the path for a debate that continues to this day and that fundamentally rests on the directions they inaugurated. When it comes to McLuhan, we now encounter in the literature many forms of softer or harder determinism which continue to raise awareness about the impact the design of technologies has on us, independently of the functions we can, or think we can, freely attribute to them. One of the most interesting theories in this respect is the mediation theory introduced by Peter-Paul Verbeek. Indeed, more (techno)pessimistic thinkers, such as Don Ihde, or very (techno)pessimistic thinkers, such as Jacques Ellul – those who witnessed the rise of the Internet and the more surreal developments in Artificial Intelligence, or who perceived a general renunciation of personal autonomy in the era of Big Tech and a potentially catastrophic fusion between an authoritarian political regime and social media monopolies – have only taken McLuhan’s extreme arguments further. They argued for the possible existence of an autonomy of technology, capable of self-determination according to its own rules and independently of any human intervention. In the case of Williams, the part of his argument according to which technology is value-free, its aims being necessarily determined by human beings, has been developed into what is now known as the Value-Neutrality or Instrumentalist Thesis. According to it, technology is not “an extension of man” in the stronger sense; it remains a tool, arguably one with particular characteristics, but one whose use man can always decide upon, assuming complete responsibility for its aims. The fact that the technological artefact affects one’s decisions or even one’s perception of reality is not denied, but it usually represents one factor among others, none of which changes reality as it is in any sense. Just as with personal experiences, education, natural inclinations, virtues and vices, we are capable of exercising control over the ways in which technology shapes our behavior.
Having introduced the central issue of this essay in a succinct manner, I will argue that a thorough analysis of the relationship between humans and technology must necessarily involve arguments and insights from both perspectives described, the choice between the two being much more difficult to make than it may seem at first sight. In replying to the objections brought by the other side, each side has managed, at least up to this point, to offer quite compelling and systematic arguments, and the dispute is far from settled. More precisely, this paper will deal with the “weaker” versions of each position, which, to my mind, are more fruitful because they do not focus on just one aspect, thereby simplifying the discussion, but are concerned with specific nuances that nevertheless make a significant difference. I will try to analytically reconstruct these arguments and critically explore them, making reference to the aforementioned thinkers and several others, without being interested in a historical or chronological analysis, focusing instead on the most pressing tensions concerning our problem. Finally, I hope to show why it is so crucial to remain receptive to both stances, from a theoretical perspective but also, sometimes even more importantly, for practical and prudential reasons.
Following McLuhan, Peter-Paul Verbeek is one of the most renowned proponents of a form of technological determinism known as mediation theory.1 This theory holds that, given that technology has such a significant impact on the way in which we perceive and experience reality, the traditional relationship between object (technology) and subject (human being) simply does not reflect the extent to which we are shaped by the former. Instead, the thinker argues that this link is much more accurately described as a continuous interaction: a process in which humans and technologies are not two pre-given, distinct entities, but rather the results of their repeated encounters. We humans simply end up having a different perception of reality after the encounter with technology, which is why whoever designs a technology is not creating just an artefact, but is sketching the very parameters of a relationship between us and the world. A classification of the concrete ways in which this interaction takes place is offered by Don Ihde in his now famous book Technology and the Lifeworld: From Garden to Earth (1990). In fact, Verbeek builds on Ihde’s postphenomenological approach to technology, but in a more technical manner and without the Heideggerian and Husserlian jargon. Of course, the best example that can support the validity of this perspective on technology is social media, which has created, through its algorithms, types of social relationships and individual habits that were not necessarily intended by either the designers or the users. These new modes of interacting and behaving now depend on this particular technology and have extended beyond it. Seen in this light, the relationship between man and technology turns out to have increasingly blurred borders.
There are arguably many problems with this argument, which we will deal with later, but I think that, despite them, some of the practical implications following from this perspective are not only legitimate, but even crucial, given the extent to which technology has evolved in our times. The phrase technological determinism does seem, by definition, to imply that, once we interact with the technological medium, with or without being aware of it, all our actions and perceptions are irremediably shaped according to its form, and there is even the possibility of a “flip or reversal in which the human users of digital media become an extension of those digital media”2. However, Verbeek does not believe that a lack of responsibility for our choices, or for the way in which we apply technology, follows from our becoming aware of the necessity of technological mediation and from our acceptance of a mixture between technological form and empirical reality. One of the main purposes of his thesis is to bring to wider attention the fact that those who are responsible for the design and structure of technologies are, through their choices, actually assembling the pieces of a certain way of looking at the world. A designer of technology can choose whether or not to emphasize user privacy, can choose to provide users with more or less security, can encourage or discourage conflict on social media, and can seek out personal data or refrain from doing so – all of this is reflected in the way in which that technology can be used “by default”. The space for choice and action offered by a technology is, therefore, predefined to a great extent, its functioning parameters being the result of (literally) existential choices, which become irreversible for users due to their technical complexity. In turn, the room for personalization is itself predetermined.
The simplest example would be the audience of a social media post: though there will always be alternative options for each user (share with friends, with everyone, with certain persons, etc.), one of them will always be set by default. Thus, the moment the user notices that sharing with everyone, friends and strangers alike, is encouraged, he will need to make the effort to choose the option that provides him with more privacy. Of course, the effort itself is minimal – just one click away – but this nevertheless tells us something about the way in which that virtual space is conceived and the rules encouraged inside it. Perhaps, later on, the same user will discover that, for example, texts going beyond 100 words are automatically demoted by the algorithms and only oversimplified fragments of communication are promoted, which diminishes the probability that an author will express a reasoned opinion. Bringing all these details together, we could arrive at the conclusion that the virtual space we inhabit nudges us towards ways of interacting with people, online behaviors, types of writing, and modes of sharing our own life about which, in the absence of technological mediation, we would be much more cautious. Of course, the very possibility we have of rejecting these structures testifies to our liberty in the virtual world; but still, how exactly is the common user encouraged to understand the subtle mechanisms behind this space? How transparent are all these things to him, and what is his (real) liberty to challenge or even replace them?
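To make the point about value-laden defaults more tangible, here is a minimal, purely hypothetical sketch (the class, the default audience value and the 100-word threshold are invented for illustration and imply no real platform’s code). It shows how a handful of design-time constants can silently encode a preference for reach and brevity over privacy and elaboration:

```python
from dataclasses import dataclass

@dataclass
class PostSettings:
    # The designer's default is itself a value choice: "everyone"
    # privileges reach and engagement over privacy.
    audience: str = "everyone"  # alternatives: "friends", "custom"

def rank_score(post_text: str, base_score: float) -> float:
    """Hypothetical ranking rule: demote posts longer than 100 words,
    mirroring the essay's example of algorithms discouraging longer,
    more reasoned contributions."""
    if len(post_text.split()) > 100:
        return base_score * 0.5  # arbitrary, illustrative demotion factor
    return base_score

# A user who wants more privacy must actively override the default:
settings = PostSettings()      # audience == "everyone" unless changed
settings.audience = "friends"  # the "one click" of extra effort
```

Nothing in these few lines forces the user’s hand, yet every path of least resistance has already been chosen for him – which is exactly the sense in which the space for choice is predefined.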
Moreover, this discussion points to a problem that is, to my mind, rarely taken seriously by the common user, namely that the mission of technology designers and technicians, be they engineers or scientists, does not in fact consist of the “merely technical”, axiologically neutral problems that they were hired to solve. Engineers and scientists alike are moral agents who infuse values into their creations, an act that must be taken seriously and therefore given much more attention than is usually presumed in technology-related fields. In Verbeek’s words:
“And rather than seeking for autonomy against the powers of technology, we should seek to develop responsible forms of mediation. Users, designers, and policymakers should be enabled to read, design, and implement technological mediations, in order to be able to deal in a critical, creative, and productive way with powers that remain hidden otherwise. Human freedom cannot be saved by shying away from technological mediations, but only by developing free relations to them, dealing in a responsible way with the inevitable mediating roles of technologies in our lives.”3
In an era where technological development and usage seem impossible to stop or even to ignore, perhaps this solution, which emphasizes freedom inside the close relationship with technology rather than away from it, becomes more realistic. At the same time, it cannot be denied that Raymond Williams’s argument about the political and economic interests that control technological production and dissemination remains true; and since these leading forces will always exist, perhaps a better strategy would be to try to influence them through this change in the approach to technology and morality, one that should be reflected in the wider climate of opinion. In turn, this will inevitably put pressure on those who exercise significant social influence.
Another aspect worth taking into account in any analysis of technology that claims to be exhaustive and realistic, and one which this family of theories that includes Verbeek’s emphasizes more than the opposite position does, is the material and, in a (paradoxical) sense, natural character of technological artefacts and their influence on humans. Although, from an epistemological and metaphysical point of view, a primacy of form that almost completely obscures content is highly problematic, this perspective helps us recognize that human beings are technological in a more profound sense: they have always invented tools, be they conceptual or technological, in order to externalize specific aspects of their human capacities – to bring about knowledge and a more comfortable life, and to cultivate efficiency, rapidity and craftsmanship. All the tools discovered or invented by man always have mediated, and always will mediate, between him and reality. McLuhan was therefore right to use a very broad definition of media to (hyperbolically) show that reality has always been mediated in a certain way, despite the impression we have that the social change brought about by recent technological development is a purely modern phenomenon. In his words, what matters more is the “new scale that is introduced into our affairs by each extension of ourselves, or by any new technology.”4
Many forms of soft determinism have in fact taken over and developed this idea, which was only touched upon by McLuhan, the argument now being that it is probably far-fetched to regard every minor technological development in an agricultural, industrial or digital society as producing a fundamental individual or social change. Nevertheless, we can speak of degrees of technological determinism that can be “historically specific to a degree of technological complexity in a given cultural frame.”5 Once again, though from a strictly theoretical point of view a McLuhanite approach risks promoting a “whole confusion of form and content that is dangerous epistemology, since it is yet another force disrupting harmony and leading to excitable action”6, what I think we should take from all these ideas is the acknowledgement that crucial practical implications follow from the design of technology. To a greater or lesser extent, our “technical” work is imbued with values and ways of seeing the world which inevitably orient users towards certain usages rather than others, without determining their final choices. “To a greater or lesser extent” can mean that in some cases moral values are more obviously built into the technological artefacts, which may become good or bad in themselves7, while in other cases we deal with a rather cool medium8, one which provides the user with a greater degree of participation and freedom. In this sense, when dealing with complex real-life situations, it is much more realistic to embrace the ambivalence of technology rather than its total neutrality, following Winner in arguing “that specific artefacts are [or may be intentionally conceived as] value laden, but not technology as such”.9 After all, the aim of a theory or a theoretical position must not be to obstruct our view of contingent reality – its abstract limits must be recognized and overcome when we are simply trying to see reality as it really is.
Although there are many other aspects of technological determinism that contribute to a more thorough understanding of the metaphysical and epistemological relationship between human beings and technology, I will underline just one more before moving on to the contemporary versions of the Value-Neutrality Thesis. The argument I am referring to provides us with an image of technology which is, in a way, more phenomenological, because it seeks to capture in greater detail the experiential aspects and the subtler “sub-processes” involved in man’s encounter with technology. Hans Oberdiek suggestively points out the idea that I want to introduce:
“‘Technology’ cannot be adequately understood if one thinks only of its material products: tools, machines, and devices generally. Those who define technology narrowly usually wish us to see advanced technology as no different in kind from a stone-hammered flint arrowhead: the products of modern technology may have a more complex structure and be put to more sophisticated uses…”10
Indeed, it can certainly be argued that, in principle, the “more complex structure” or the “sophisticated uses” of technology do not really change its ontological status, which remains separate from that of humans. As the fragment quoted above itself shows, we are still thinking about technology within a rather functional paradigm. However, Oberdiek is right to point out that our interaction with technology involves many other aspects, which he describes in detail, and which show that, practically speaking, we should not imagine scientists as persons who deny the existence of value-laden technological artefacts, even though the research process must be as neutral as possible. Similarly, we should not picture engineers as persons who robotically assemble technologies following a given plan, unable to foresee and, if needed, even stop their potentially destructive applications. The author describes technical artefacts as the foreground of technology, but he insists that not every type of creation can be included in this category – only those which require a rational discipline to both make and use. Moreover, he adds to the list the know-how and technique that a person must possess and permanently exercise when creating a particular artefact.
To my understanding, this knowledge involves some generic, fixed steps, but, at the same time, there is a tacit dimension which springs from that person’s whole work experience. One is not simply a tabula rasa, unable to compare and observe the intrinsic peculiarities of numerous artefacts. One is in principle able to detect the potential, multiple functions of one’s creations and to understand many of their possible consequences. Finally, the author stresses the existence of certain theories and, I would add, paradigms, which inform a particular technology, the comprehension of which depends on the creator. Indeed, “The degree of comprehension of the relevant theories inevitably affects one’s understanding of the technology in question.”11 This only shows the (apparently banal) fact that the degree to which each technology designer is familiar not only with the scientific and technical aspects, but also with the social and moral consequences of the ways in which his creations are conceived is up to him or, more specifically, up to his conscience.
Technological determinism, broadly speaking, accuses the Value-Neutrality Thesis of oversimplifying the interaction between man and technology, since the former is more inclined to raise serious concerns about technology design and the human responsibility intrinsic to it. That said, from a metaphysical and epistemological point of view, McLuhan and his followers are attacking certain traditional distinctions between object and subject, or content and form, which, theoretically speaking, are not that easy to undo and which, from a practical perspective, if their destruction is followed through to its final implications, may have many negative unintended consequences. This is also part of the argument advanced by Martin Peterson and Andreas Spahn, who, against Verbeek, propose a position which they call “The Weak Neutrality Thesis”12. It basically claims that “technological artefacts sometimes affect the moral evaluation of actions, although these artefacts never figure as moral agents or are morally responsible for their effects.”13
First of all, Peterson and Spahn deny the anthropomorphizing of technology advanced by Verbeek, who claims that there is an “active” way in which technology shapes us. For example, when we use a camera, it is true that it can zoom in on or out of certain aspects of the reality in front of us, but in this situation it is clearly our perception that is changed, and not reality itself. Probably a much more relevant example here would be the internet and social media which, as underlined above, give birth to new types of social interactions and ways of behaving. Without denying this, and without contradicting the soft determinists’ thesis that a society in which the use of the internet is very widespread may undergo large-scale cultural and social changes, it is clear that these technological entities are still passive: only the designers and the users actively decide to produce and, respectively, to use them in a certain way. Still, let us also briefly analyze one of the social interactions considered to be newly introduced by the internet.
Many people no longer meet for the first time in real life but in the virtual world, and they get to “know” each other through the medium of social media, which certainly has a big influence on the development of their “relationship”. It is even possible to remain only “friends” with someone on these platforms, without ever having to meet that person in real life. No matter how widespread this phenomenon is, I think it rests on a (quite banal) confusion. The fact that one can both start and continue a permanent relationship with a person fully mediated through the internet means exactly this: that it is a virtual communication and not a real-life relationship. One can argue that sending messages or pictures of each other is a form of getting to “know” someone; one can have a clue about how that person thinks or what that person looks like, and those clues can turn out to be consistent with reality. But precisely because this mediation primarily affects our perception, it is equally probable to find out that a person merely seemed to be a certain way on social media and that a direct interaction reveals his true traits. Again, no matter how common this situation is in our lives and how much we have gotten used to it, so as to consider it “part of reality”, it simply does not change the ontological status of technology.
Another crucial point usually advanced by technological determinists refers back to Oberdiek’s critique that instrumentalists reduce technology to artefacts. They claim that the values, knowledge and know-how that we invest in designing and using technology once again blur the distinction between an active subject and a passive object and lead us humans into a much more intertwined relationship with technology. Think of the computers that we have put so much effort into making more and more advanced, and think of all the different operations that are made possible only by using them and on which millions of people are nowadays highly dependent. If we humans were simply to disappear one day, these things would just stop working, because all the knowledge about their functioning is stored in human brains. Therefore, there are no sufficiently strong premises for negating the traditional distinctions between subject and object and for ascribing agency and intentionality to technology. On the contrary, this rather proves that it is human beings who master all types of technology and that we can, from a theoretical point of view, abandon or change it at any time.
On the same note, if we follow Per Sundström’s argument, it is true that, once it is conceived by the human mind, technology will actualize its potential powers and effects. Because of that, it is also true that technology ends up embodying certain values. For example, it cannot be denied that, right from the start, guns are conceived to wound or kill, while respirators are not. We can go as far as considering that these aspects of their design and functionality already have some fixed, infused values and aims inside them, although this thesis remains very problematic.14 But the author argues that, even if we accept all these observations, before any technological artefact is applied in practice, there is always a residual neutrality linked to its usage.15 This is possible because its mere existence, infused with values or not, involves no agency; it cannot “decide” what its final aim will be. It is our human capacity for inaction, for saying “no”, that can stop the actualization of already infused values or countervail them by finding a distinct aim for the product. There is no such thing as a technological imperative that impels us towards a certain moral decision, although it can certainly be argued that we are highly influenced by the available technologies, just as much as we are influenced, for example, by our social background or by an addiction (such as drinking too much alcohol).
One final aspect of this second position will be discussed before reaching some general conclusions. With all theoretical distinctions in place, how can the Value-Neutrality Thesis argue against Boaz Miller’s convincing and sophisticated argument according to which the death camps used by the Nazis against the Jewish population were intrinsically and unambiguously evil? In other words, how can this thesis account for those exceptional cases in which the evil produced with the help of technology was so monstrous and on such a large scale, especially when people like Albert Speer used, among other arguments, the presumed neutrality of technology to deny any moral responsibility for their deeds? The short answer is that precisely because, as we have already argued, technology will inevitably end up embodying certain identifiable values, and because people necessarily use any kind of tools and artefacts with a certain aim in mind, the Value-Neutrality Thesis cannot be used as an argument against human moral responsibility, but strongly in favour of it – for it explains why one person did a good thing and another a bad thing using the same object.
If we go back to the problem of technology design, the answer becomes more complicated, but not impossible. First, as already said, technology designers must understand that, since technology is not value-neutral and yet has no agency of its own, our contact with it infuses it not only with our technical, but also with our moral knowledge and intuitions. However, both our individual knowledge and our intuitions are limited, so a significant space of neutrality will remain to be filled by the users themselves, whose behavior and knowledge will most likely determine different aims or assign different values that perpetuate or countervail the ones already embedded in the product. In this way, “Unintended consequences inconsistent with the values allegedly embedded in a technology mean that they are not embedded in it after all”16 – this will always be the axiological and teleological sense in which technology remains neutral. Indeed, this argument shows that technology is perhaps neutral, but not in the hard manner in which neutrality is often portrayed in the literature. Beyond the sphere of this problem, there are many ongoing debates in which the total neutrality of both science and technology is questioned, without the objectivity of their results being denied.
Second, designers must also be aware that any artefact they design will surely be used for a certain aim and that their own experience provides them with the capacity to foresee a range of possible applications, although it will never be possible for a single human being to grasp all of them. It is indeed absurd to think that designers responsible for the most sophisticated technologies are simply unaware of the aim of their technical work. It is true that nowadays most people work within an extended division of labour, quite alienated from the bigger picture and from the final purpose of their activity, but this does not seem to be the case for someone in the position of Albert Speer. Using the short and incomplete analysis undertaken here, we can perhaps sketch an answer to Miller’s challenge: the Holocaust that was made possible by the extensive use of technology did not take place because the latter was mistakenly conceived as neutral, but because all the human inputs that influenced the making and the using of this technology were evil – the aims, the values, and the actions both of the people contributing to its design, who did not occupy important positions and who were in many cases forced to work for a dictatorship that did not allow them to foresee consequences or to freely infuse their own, better values into their work, and, finally, of the users. Inside a dictatorship in which all truth becomes a lie and all normal activities undertaken by people are considered reactionary, technology was just another victim of maltreatment.
To conclude, in this article I have tried to present the softer versions of technological determinism and of the Value-Neutrality Thesis. My main goal was to show that when two extreme positions are made more nuanced, they become complementary to a significant degree and offer us a much more thorough and detailed image of the phenomenon we want to observe. In the case of the first position, I consider it fundamental to engage with it because one of its major practical implications is the increased moral responsibility assigned to designers of technology. In this view, they are portrayed as real human beings who are conscious of what they are doing and who, through their work, inevitably embed certain values and offer possible paths for using a technological artefact. Although this hardly removes the ontological distinction between technology and human beings, it is also important for all individuals, be they designers or users, to become much more aware of the material and natural properties of technology in order to understand how they shape our perception and mediate between us and reality. In the case of the second position, it is crucial to keep the separation between perception and reality as strong as possible, because only this separation ensures the possibility of objective criteria for our moral decisions, and reality is the only thing that allows us, human beings, to relate to one another by speaking of something independent of our minds. Although neutrality should never be understood in a hard sense, this position also places moral responsibility on us, by making us assume and define the aims, intentions and values that we want to promote and through which we want to make a change in the world using the latest technologies.
NOTES
1. Verbeek, P. P. (2015). “Cover story: Beyond Interaction: a short introduction to mediation theory”, Interactions (Association for Computing Machinery), 22(3): 26-31.
2. Logan, R. K. (2019). “Understanding Humans: The Extensions of Digital Media”, Information, 10(10): 304.
3. Verbeek, p. 31.
4. McLuhan, M. ([1964] 1967). Understanding Media: The Extensions of Man, London: Sphere Books, p. 15.
5. Lister, M., Dovey, J., Giddings, S., Grant, I. and Kelly, K. (2009). New Media: A Critical Introduction (2nd ed.). London and New York: Routledge, p. 96.
6. Wagner, G. (1967). “Misunderstanding Media: Obscurity as Authority”, The Kenyon Review, 29(2): 255.
7. See Boaz Miller’s argument on the intrinsic evil of death camps in Miller, B. (2020). “Is Technology Value-Neutral?”, Science, Technology, and Human Values, 46(1): 53-80.
8. Here I refer to McLuhan’s distinction between hot and cool media from the second chapter of Understanding Media: The Extensions of Man: “… hot medium is one that extends one single sense in ‘high definition.’ High definition is the state of being well filled with data. A photograph is, visually, ‘high definition.’ A cartoon is ‘low definition’, simply because very little visual information is provided. Telephone is a cool medium, or one of low definition, because the ear is given a meagre amount of information. And speech is a cool medium of low definition, because so little is given and so much has to be filled in by the listener. On the other hand, hot media do not leave so much to be filled in or completed by the audience. Hot media are, therefore, low in participation, and cool media are high in participation or completion by the audience.” (pp. 22-23). Although this distinction and, I would say, McLuhan’s work in general are highly intuitive and literary rather than systematic and theoretical, I nevertheless think that this particular distinction (and other parts of his work) can be reinterpreted more systematically and made very relevant to the difference between technologies that are quite obviously built so as to massively direct human behaviour and those that seek to be more neutral in their default design.
9. Miller, pp. 4-5.
10. Oberdiek, H. (1990). “Technology: Autonomous or Neutral”, International Studies in the Philosophy of Science, 4(1): 67-77.
11. Oberdiek, p. 69.
12. Peterson, M. and Spahn, A. (2011). “Can Technological Artefacts Be Moral Agents?”, Science and Engineering Ethics, 17(3): 411-424.
13. Ibid., p. 412.
14. Take, for example, the case of a child who finds a respirator in his house and accidentally swallows a piece of it or strangles himself with its strap. Or, to refer to a frequently evoked example, think of the pen that can become a weapon, or of the criminals who in many cases use very unusual objects (sometimes whatever they have at hand) to kill people. These situations show that it is perhaps more accurate to speak of a more likely aim that a certain technology will be used for, which in turn can stem from a certain infused value (such as safety or autonomy).
15. Sundström, P. (1998). “Interpreting the notion that technology is value-neutral”, Medicine, Health Care and Philosophy, 1: 41–45.
16. Miller, p. 11.
BIBLIOGRAPHY
Ihde, D. (1990). Technology and the Lifeworld: From Garden to Earth, Bloomington: Indiana University Press.
Lister, M., Dovey, J., Giddings, S., Grant, I. and Kelly, K. (2009). New Media: A Critical Introduction (2nd ed.). London and New York: Routledge.
Logan, R. K. (2019). “Understanding Humans: The Extensions of Digital Media”, Information, 10(10): 304.
McLuhan, M. (1962). The Gutenberg Galaxy: The Making of Typographic Man, Toronto: University of Toronto Press.
McLuhan, M. ([1964] 1967). Understanding Media: The Extensions of Man, London: Sphere Books.
Miller, B. (2020). “Is Technology Value-Neutral?”, Science, Technology, and Human Values, 46(1):53-80.
Oberdiek, H. (1990). “Technology: Autonomous or Neutral”, International Studies in the Philosophy of Science, 4(1): 67-77.
Peterson, M. and Spahn, A. (2011). “Can Technological Artefacts Be Moral Agents?”, Science and Engineering Ethics, 17(3): 411-424.
Sundström, P. (1998). “Interpreting the notion that technology is value-neutral”, Medicine, Health Care and Philosophy, 1: 41–45.
Verbeek, P. P. (2015). “Cover story: Beyond Interaction: a short introduction to mediation theory”, Interactions (Association for Computing Machinery), 22(3): 26-31.
Wagner, G. (1967). “Misunderstanding Media: Obscurity as Authority”, The Kenyon Review, 29(2): 255.
Williams, R. (1974). Television: Technology and Cultural Form, London: Fontana.