AI has been around for a while now, but its applications are still very much shrouded in hyperbole and myth. To be sure, important advances are taking place today, pioneered by people like Demis Hassabis, who leads what is now Google’s DeepMind. However, much of this work generates both fear and tremendous expectation among the general public, due predominantly to a lack of knowledge and a lifetime of science-fiction consumption.
Many questions around AI have become commonplace: When will it reach its potential? Should we fear it? Will it destroy or generate employment? What positive implications and uses can it have?
The narratives around AI as a concept are confusing due in part to the lack of a standard definition beyond the one proposed by Max Tegmark: “non-biological intelligence.”* The vagueness only leads to more questions: is AI the ability to make complex calculations? Is it the capacity to imitate human beings, perhaps even to the point of independent thought?
The answer is hotly debated. People usually associate the term AI with a machine that mimics the cognitive functions usually ascribed to human beings, such as reasoning or learning (even writing, now). It may be that herein lies the greatest fear about AI: whether this ability to imitate or emulate could substitute for humans in all areas. This question, already posed by the world of sci-fi, is curiously simple to answer: no. At the moment, artificial intelligence is still far from replicating or improving upon various aspects of humanity: creativity, emotional intelligence, interpersonal communication. Even genuine understanding is (so far) beyond its capabilities, a limitation related to Moravec’s paradox.
Nevertheless, even knowing that these capabilities cannot be imitated, hesitation around AI remains. For one thing, many jobs have disappeared due to the application of artificial intelligence. However, as in all previous industrial revolutions (in this case, digital), the belief is that new jobs will arise, alongside new ideas such as universal basic income (UBI), currently being trialled in Spain.**
For another, there is the potential for adverse outcomes from the use of a biased algorithm. Like all computing, AI algorithms are coded to receive data, process it, and generate results. Bias enters during processing: we decide what the algorithm should “value” when producing its result — e.g. age, education, and skill, but also ethnicity, gender, or the business and financial criteria of the company.
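As a concrete, entirely hypothetical illustration of how those design choices encode bias: the weights a designer assigns decide which traits count against a person, before any data is ever seen. All feature names, weights, and numbers below are invented for this sketch.

```python
# Hypothetical illustration: bias lives in the weights the designer chooses,
# not only in the data. Features and weights are invented for this example.

def score_candidate(candidate, weights):
    """Weighted sum over whichever features the designer chose to value."""
    return sum(weights[f] * candidate.get(f, 0) for f in weights)

candidate = {"experience_years": 5, "education_level": 3, "part_time_history": 1}

# A "neutral" weighting considers only skill-related features...
neutral = {"experience_years": 1.0, "education_level": 1.0}

# ...while a cost-driven weighting penalises a trait irrelevant to ability,
# e.g. a history of part-time work.
cost_driven = {"experience_years": 1.0, "education_level": 1.0,
               "part_time_history": -2.0}

print(score_candidate(candidate, neutral))      # 8.0
print(score_candidate(candidate, cost_driven))  # 6.0
```

The same person scores differently under the two weightings even though nothing about them changed — only what the algorithm was told to “value”.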
With regard to the last criterion, we have already seen cases where algorithms value workers as “disposable components,” favouring part-time employment instead of investing in worker qualification and long-term benefits. AI has yet to fully understand the lengthy and often painful history by which workers’ rights came to be protected in law.
AI’s application to social impact
Despite these concerns, we still wonder: can AI improve social impact? The answer is a resounding yes. AI can develop better responses and means to address major social problems. An automated AI system could allow us, for example, to separate waste for recycling in a far more efficient and precise fashion.
Through deep learning, we could predict natural disasters, reduce the emission of polluting gases (whether by measuring them to inform decision-making or by running a building’s heating system correctly), measure biodiversity, and even develop and apply the best treatment for a disease. Indeed, the possibilities are vast.
Ensuring AI is responsible
Paradoxically, the possibilities for AI are limited only by our human creativity, imagination and, above all, responsibility. Responsibility in AI implies transparency (understanding the logic of algorithms and making their objectives publicly available), social benefit (i.e. objectives aligned with social needs), and safety, meaning the system cannot be hacked for malfeasance.
At BothOfUs we have already adopted a few AI applications for social impact. For instance, we are using AI to calculate air pollution. CO2Puff is a smart map driven by a machine-learning algorithm, which uses real-time data to calculate air pollution at a micro level. First we look at traffic congestion patterns, street structure, weather, sunrise, cloud coverage, population, atmospheric stability and other factors.
We then compare these patterns to local air quality and wind forces. The map can provide real-time information and future predictions on pollution levels for an area, or even for a single street where no air-measurement station is available. Our goal is to help city-planning endeavours and businesses explore their air quality by providing the best data and reports.
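A minimal sketch of the underlying idea — assuming nothing about CO2Puff’s actual implementation — is to learn a mapping from local conditions to measured pollution, then use it to estimate the level on a street that has no station. The features, numbers, and toy gradient-descent regression below are invented for illustration only.

```python
# Toy version of the pattern: fit measured streets, predict unmeasured ones.
# All data and feature names are invented; a real system would use a far
# richer model and live sensor feeds.

def fit_linear(X, y, lr=0.01, epochs=2000):
    """Plain stochastic-gradient-descent linear regression (no libraries)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = sum(wj * xj for wj, xj in zip(w, xi)) + b
            err = pred - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Training data from "measured" streets: [traffic_density, wind_speed] -> NO2
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [80.0, 15.0, 60.0, 10.0]

w, b = fit_linear(X, y)

# Estimate pollution on an unmeasured street: heavy traffic, little wind.
estimate = sum(wj * xj for wj, xj in zip(w, [0.8, 0.2])) + b
print(round(estimate, 1))  # heavy traffic + low wind -> a high estimate
```

The pattern, not the toy model, is the point: once a mapping from observable conditions to pollution is learned, every street with known traffic and weather gets an estimate, station or not.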
The importance of community and support
AI is also used to improve health outcomes, reduce stress, and advance treatments for disease. However, equally (or perhaps more) important than the actual application is the community that works together to help developers create the solutions. Take Women in AI, a gender-inclusive community that raises awareness of gender diversity. These communities are central to any AI application designed to better society.
Of course, applications and communities cannot always devise social-impact-based AI solutions alone. Help is sometimes needed. That’s where initiatives like AI for Good, a university innovation department in Scandinavia, come into play, providing research grants and guidance. Sweden went a step further, creating AI Sweden, which establishes partnerships between smaller AI companies and corporates, all in the name of benefiting society.
At the end of the day, the question is not whether AI can benefit society; it’s how soon we can get started and how to do it as safely as possible.
*Tegmark, Max (2017). Life 3.0: Being Human in the Age of Artificial Intelligence (First ed.). New York: Knopf. ISBN 9781101946596. OCLC 973137375.
** UBI was initially introduced in Spain as a COVID-19 measure but is now being debated on its long-term merits as a means of redistributing both wealth and that most important of resources: time.
Andrés Chamorro is the Business Development Manager for Spain at BothOfUs. Whilst relatively new to the team, he is an expert in political science and is deeply motivated by and dedicated to social impact.
I think the main issue is, as always, that AI is supposed to be a tool. When we use it as such, we can make tremendous progress. When we abdicate the use of the tool for the ‘fun of the tool’, as in how far can we take this, we have lost the ethical high ground. Our social enterprise is using machine learning (part of AI) in our work to do predictive analyses. Why? Because when we improve decision-making in the non-profit sector, we become more efficient at mundane, time-consuming tasks (think unsuccessful grant proposals), allowing more time to do the important human work: connection, service, solution-finding. AI (and ML) makes that possible.
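One hedged sketch of what such a predictive analysis might look like — the features, data, and `promise_score` helper are all invented for illustration, not the commenter’s actual system — is to rank draft proposals by similarity to past funded ones, so staff time goes to the drafts most worth polishing.

```python
# Hypothetical sketch: rank draft grant proposals by cosine similarity to
# previously funded ones. Features and numbers are invented for illustration.
from math import sqrt

def similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Features per proposal: [budget_fit, mission_alignment, past_funder_match]
successful_past = [[0.9, 0.8, 0.9], [0.7, 0.9, 0.8]]

def promise_score(draft):
    """Average similarity of a draft to previously funded proposals."""
    return sum(similarity(draft, p) for p in successful_past) / len(successful_past)

drafts = {"A": [0.8, 0.9, 0.7], "B": [0.1, 0.2, 0.9]}
ranked = sorted(drafts, key=lambda k: promise_score(drafts[k]), reverse=True)
print(ranked)  # ['A', 'B'] -- drafts most like past winners come first
```

Nothing here replaces human judgement; it only reorders the queue so that judgement is spent where it is most likely to pay off.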