AI: Artificially Invoking Underrepresentation and Misrepresentation?
ChatGPT, Gemini, Jasper, Meta AI, and platform-integrated assistants like Grok are among the umpteen artificial intelligence (AI) apps that have become almost synonymous with information processing, production, and presentation. The frenzy resembles the ‘search engine’ revolution that stormed the Web nearly 25 years ago. But can intelligence really be ‘artificially’ activated or simulated, and to what extent? And how diverse, inclusive, and realistic are AI’s inputs and outputs?
At present, AI tools and plug-ins are produced mostly by for-profit Information and Communications Technology (ICT) companies in, or from, economically stable countries. These organisations are largely started, owned, and run by White or Savarna (oppressor-caste), cisgender-heterosexual, ableist men of privileged classes, who in turn largely employ similar men. These are usually the people with the social capital, financial wherewithal, and technical education needed to obtain and sustain ICT jobs creating AI apps, tools, and plug-ins.
Many of these socio-economically advantaged persons are biased against, and ignorant about, communities and identities marginalised by gender, sexuality, SOGIESC (sexual orientation, gender identity and expression, and sex characteristics), race, ethnicity, region, caste, religion, class, disability, language, occupation, age, and more. Consequently, the textual and graphic information that AI apps and plug-ins provide lacks inclusion and diversity: algorithmic bias rooted in personal bias. AI tools also lack sufficient features and options to portray marginalised identities visually. Exclusion, in other words, is systemically enabled and technologically perpetuated.
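A toy sketch, assuming a drastically simplified ‘model’ that merely samples from its training data (no real system is this crude), makes the mechanism concrete: when the corpus reflects its makers’ blind spots, the output reproduces them at the same rate.

```python
import random

# Toy stand-in for a generative model: it can only reproduce the
# distribution of the corpus it was trained on, so under-representation
# in the input becomes under-representation in the output.
TRAINING_CAPTIONS = (
    ["a white, able-bodied cisgender man at work"] * 90
    + ["a disabled queer woman of colour at work"] * 10
)

def generate_caption() -> str:
    """Sample from the learned distribution, as a crude 'generator'."""
    return random.choice(TRAINING_CAPTIONS)

samples = [generate_caption() for _ in range(10_000)]
share = samples.count("a disabled queer woman of colour at work") / len(samples)
print(f"marginalised identities in the output: {share:.0%}")  # ~10%, mirroring the data
```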
Technological Evolution or Dangerous Regression?
Earlier, AI was a fuzzy concept with limited usage and availability. When Ritash studied AI in the 1990s, it was linked to Robotics and Decision Support Systems (DSS). Now, however, it is intriguing, addictive, ubiquitous, and seemingly omniscient. Ritash tries to avoid it, as they hardly need it. AI is not inherently bad, but extreme, unchecked usage of and reliance on it have become widespread, with unavoidable consequences. The unsavoury fallout of misusing AI includes the misrepresentation of facts, the fabrication of data, and the violation of social and professional ethics. These are among the reasons the use of AI is being regulated, especially in research, news media, and healthcare.
Using AI in place of books and search engines is often compared to earlier technological shifts, such as the camera displacing painting; but the ethical problems of data and generation here are of a very different kind. There is also a layer of White-centric oppression and projection of homogeneity in generative AI at many levels. From image-based outputs to DSS, it makes it easy and comfortable to identify individuals who do not fit certain binaries or majorities, placing them in danger. It also shows how often decision-makers arrive with ableist and White-centred perspectives, so that language adaptation and accessibility are applied to technology as an afterthought rather than as a compulsory standard for a product to be usable.
Underrepresentation of Intersectionality and AI as a Socio-technical Tool
Before AI, too, life was difficult for most minorities. While reading Sasha Costanza-Chock’s paper, Lipi learned of ‘microaggressions’: subtle technological or physical slights that briefly call your positionality in the world into question. As brown people, we experienced this while learning MS Word in school during the 2000s, where typing our full names would draw a red ‘error’ squiggle underneath them. Costanza-Chock describes an airport millimetre-wave scanner humiliating a trans woman in 2010 by flagging an ‘error’ because its binary, cis-normative coding failed to recognise her transitioning body. Now imagine how closely generative AI can monitor our bodies.
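A minimal sketch of that red squiggle’s mechanism, assuming a naive wordlist-only checker (the names and wordlist below are hypothetical, not Word’s actual implementation):

```python
# A dictionary-only spellchecker treats anything outside its English
# wordlist as an "error", so non-Anglophone names are flagged by design.
ENGLISH_WORDLIST = {"my", "name", "is", "john", "smith"}  # toy dictionary

def underlined_in_red(sentence: str) -> list[str]:
    """Return the tokens a wordlist-only checker would mark as misspelled."""
    tokens = (t.strip(".,!?") for t in sentence.split())
    return [t for t in tokens if t and t.lower() not in ENGLISH_WORDLIST]

print(underlined_in_red("My name is John Smith."))  # []
print(underlined_in_red("My name is Lipi."))        # ['Lipi']
```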
Software programming languages have existed for nearly 150 years. Among the coding languages we know, HTML (strictly a markup language), used for websites, seems the simplest. But to understand HTML, one must know English well. Software development therefore remains within circles of people who have the access and means to learn not only the methods of code but also a great deal of English. Step by step, this adds a layer in which language itself becomes a tool of oppression, on top of the technology it is coded in. In 150 years, how can there have been little to no development of indigenous ways of producing code, in even a single indigenous language?

The other way language fuels misconceptions and missing information about technology is the over-complication of tech terminology, to the point that people give up on understanding the tools they interact with daily. This exposes them to serious data theft and fraud, while immense data is harvested from their interactions with AI tools, feeding algorithms with White-centric, majoritarian, and damaging ideologies and risking technological colonisation.
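The English-gatekeeping point above is easy to demonstrate in any mainstream language; here is a minimal sketch in Python, whose entire reserved vocabulary, like HTML’s tags, is made of English words:

```python
# Mainstream languages assume English at their core: Python's reserved
# words, like HTML's <body>, <table>, and <form> tags, are English words,
# and the language cannot be written without them.
import keyword

print(keyword.kwlist)
# ['False', 'None', 'True', 'and', 'as', 'assert', 'async', 'await',
#  'break', 'class', 'continue', 'def', ...]
```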
In the chapter ‘Making Kin with the Machines’ of the book ‘Against Reduction: Designing a Human Future with Machines’, Jason Edward Lewis et al. discuss inviting AI into the ‘kin’ of human life, drawing on ideas of co-existence and right relations from the perspectives of many indigenous peoples and communities. The chapter weighs both the fears and the possibilities of inviting AI into our lives and accommodating it. Human experience and intelligence make AI what it is. Humans are selfish about recognising other intelligences in our surroundings, treating our own lived experience and thinking as the highest intelligence; much of this pride is tied to exercising agency and communicating our intelligence in languages people understand. Why, then, has AI recently become a trusted companion? What are AI companies trying to sell us? Certainty, because humans fear uncertainty: about a product’s success, grades, advertising campaigns, and so on. One would expect OpenAI, among the biggest AI companies, to use its own ‘certain and confident’ AI models for its campaigns. Yet its latest advertisements, shot on 35 mm film with some of the best actors and cinematographers, say the opposite. It makes you notice, and question, their lies.
No intelligence or algorithm can beat human connection and genuineness.
Ritash is a gender-fluid digital Queerosaur and LGBTIQAP+ peer supporter who enjoys penning verse and camera tricks.
Lipi is a gender-fluid member of the LGBTIQAP+ community, and a UX designer, researcher, and illustrator.