Will 2024 Be an Explosive Year for AI?
Examining Specifics
Governments and businesses have released innumerable lists of ethical guidelines and tactics over the last ten years. But following ChatGPT's introduction in late 2022, things solidified in 2023, according to Inioluwa Deborah Raji, a technologist with the Mozilla Foundation, a global nonprofit dedicated to internet freedom. "It appears that there is at last this shift toward concreteness. That, in my opinion, is long overdue."
Raji hopes that stays the case. "I believe that things will get even more specific. It would be regrettable if we reversed any of the gains we made in 2023."
But according to Raji, because the policy response was sparked by the launch of OpenAI's ChatGPT, it has been unduly centered on generative AI. "Face recognition, risk assessment, and even some of the online recommendation AI technologies underpinning various platforms have received very little attention."
Thankfully, an executive order signed by President Biden has directed federal agencies to draw up AI-related plans. The Department of Health and Human Services, for instance, must produce a plan outlining how AI will be used to improve public services and benefits. Raji contends that this kind of meticulous, unglamorous work is exactly what is needed. "Perhaps the domain-specific regulators and the agencies will have developed a little bit more awareness by next year."
An Expanding Gap
According to estimates from the International Telecommunication Union, approximately 2.6 billion people, or one-third of the global population, do not have access to the internet. Bolor-Erdene Battsengel, a researcher at the University of Oxford and a former vice minister of digital development and communications for Mongolia, is concerned that this digital divide will determine who can benefit from AI. There are already many disparities in our society, she notes, related to gender, wealth, and education; adding the digital divide on top of them would make the inequality gap impossible to close.
Even in cases where people in developing nations can use AI, Battsengel says, it is rarely designed with their needs in mind: the engineers building the technology and writing the algorithms are primarily from the United States or other industrialized nations. She argues that wealthier nations' response to this imbalance has so far been insufficient. "As of now, I haven't seen any initiative from AI's primary stakeholders to ensure diversity and equality. It is my sincere hope that there will be."
Arguably most concerning is the harm that AI-generated misinformation could do to democracy: elections in Bangladesh are allegedly already being interfered with, and 2024 is shaping up to be the most important election year in recent history. "Deepfakes will be used enormously, adding to existing misinformation and disinformation," Battsengel says. What she truly hopes to see from the primary tech stakeholders is "the technological means to prevent that, or at least recognize that it's a deepfake."