From “who can think” to “what can think”: Uncovering AI through history


Jun 04, 2023


You may have heard of Elon Musk's warning to humanity about the dangers of artificial intelligence by now. Whatever one's individual reservations toward the Silicon Valley magnate, it is undeniable that our construction of reality is now shakier than ever. As technological development unfolds at an uncontrollable, irreversible pace, it is necessary to reflect on the many prescient warnings that scholars have issued over the past several decades.

The notion that machines can perform what was thought to be exclusively within the capacity of the human mind dates back to mathematician Alan Turing's 1950 thought experiment, now known as the Turing Test. In Turing's experiment, participants guessed whether the sender of the messages they received on a computer terminal was a human or a machine. Turing asserted that if machines can mimic human beings and act as enormous stores of human consciousness, they can in effect become human beings; the subsequent merging of machine and human intelligence creates the "cyborg." In the words of American literary critic Katherine Hayles, "You are the cyborg, and the cyborg is you." Consequently, the liberal subject, widely regarded as "the human" since the Enlightenment, now becomes "the posthuman."

Decades after Turing's experiment and Hayles' argument, the modern iPhone creates and remembers hundreds of complicated passwords for you. Your iPad stores your notes for class and responds to your voice. Your Apple Watch measures your heart rate and tracks your calories. Such easy-to-access resources make it hard to envision a life without electronic devices. This reliance, while positive in many respects, becomes eerie when you imagine two versions of yourself existing: one composed of blood and flesh, and one in the form of signs and symbols in an entirely digital environment. When it now takes a matter of seconds to reach chatbots that can instantly fashion complex, well-thought-out essays, it seems safe to conclude that technology can no longer be meaningfully separated from the human subject.

In this new posthuman paradigm, where information escapes the flesh and materiality is rendered obsolete, it seems integral to err on the side of caution when taking advantage of Silicon Valley's brainchildren. It is important to actively study both the scientific facts that reveal the tangible impacts of AI and the literary texts that reveal the complex social, cultural and political issues humanity faces as a consequence of technological development.

Does this new revelation mean that humans can now treat their bodies as mere fashion accessories? Not necessarily. In her 1999 book "How We Became Posthuman," Hayles describes an ideal posthuman world as one that "embraces the possibilities of information technologies without being seduced by fantasies of unlimited power and disembodied immortality." She also notes that this world ought to "recognize and celebrate finitude as a condition of human being." This vision seems far from manifesting into reality, as the organizations where power is most concentrated boast of their sophisticated technologies and virtuality; the Pentagon, for example, now treats virtual reality as an "unprecedented theater" in which wars are fought. The concerns raised by contemporary scholars make Hayles' vision all the more difficult to accomplish.

Two decades after the publication of Hayles' prescient posthumanist intervention, sociologist and Princeton University professor Ruha Benjamin coined the term "the New Jim Code." The term refers to a range of discriminatory designs in technology that explicitly work to amplify hierarchies and replicate social divisions. Simply put, the technology that permeates almost every crevice of the contemporary human experience can replicate and exacerbate systemic inequalities, sometimes putting on a deceptive, feel-good facade that seems to promote the contrary. As Benjamin points out, a plethora of current applications embody this code.

Beauty AI, an initiative backed by a variety of personal health and wellness organizations in Australia and Hong Kong, promoted itself as the first-ever beauty contest judged by robots. The app required contestants to submit a selfie to be examined by a robot jury, which then selected a king and queen. While the robot judges were programmed to assess contestants based on wrinkles, face symmetry, skin color, gender, age and ethnicity, the creators of Beauty AI admitted in 2016 that their "robots did not like people with dark skin." All but six of the 44 winners were White, supporting the growing concern that algorithms are biased because of the "deeply entrenched biases" of the humans who create them.

While one may question why a case study of a beauty application bears serious implications for the future of humanity, the reality is that the same biases have reached the offices of Silicon Valley kingpins. For one, the tendency of tech companies to selectively absorb the "acceptable" aspects of Black culture and abandon the rest altogether raises concerns about how biased humans ultimately create biased machines. In her book, Benjamin recalls an anecdote from a former Apple employee who describes his experience developing speech recognition for the virtual assistant Siri. While his team worked on different English dialects, such as Australian, Singaporean and Indian English, Apple did not work on African American Vernacular English because, as his boss put it, "Apple products are for the premium market." Ironically, this happened in 2015, just one year after Apple purchased the headphone brand Beats by Dr. Dre, co-founded by African American rapper Dr. Dre, for $3 billion. Benjamin's anecdote emphasizes the tendency of powerful companies to devalue and value Blackness simultaneously, a trait passed directly on to the increasingly capable machines they develop.

When technology labels individuals based solely on the ethnic implications of their names, it is not difficult to envision similar technologies affecting individuals in every aspect of life. From airport screenings to housing loans, job applications to online shopping, technology shapes users' quality of life. Under such circumstances, the posthumanist argument is only further buttressed and the ideal posthuman world further undermined. While it may seem a bit too early to ponder a "mass extinction" or "the destruction of civilization," the interventions scholars have made time and time again warn us of the far-reaching, irreversible consequences humanity could reap as technology intersects with the flesh.

From Seoul, South Korea, So Jin Jung is an Opinion Columnist with a passion for politics and journalism. She can be reached at [email protected].
