The future is brAIt – impressions from World Summit AI and opportunities for the water sector

Each October, 2500 AI enthusiasts and specialists, from academia but in particular also from business, meet at the World Summit AI in Amsterdam. Its 2019 edition taught us about the responsible use of AI (see this blog post), and the 2021 edition included a special session on addressing our environmental challenges (see this blog post). If any central theme can be distilled from the 2022 edition, it is the continued conviction, pervading the speeches of most presenters, that AI is here to change our lives and our planet for the better. We, two members of KWR’s hydroinformatics team, participated in the conference on consecutive days. This blog post summarizes our combined main impressions and reflects on their relevance for the Dutch water utilities.

Day 1

After the opening presentations (which included some attention to the global fight for women’s rights), a panel of renowned experts discussed the possibility (and indeed the ascertainability) of sentience in AI. There is an ongoing debate in which opinions range from “AI is just statistics” to “sentience has already emerged in AI”. Sentience seems to be difficult to define and test for, even more so than intelligence. We tend to be misled by anthropomorphic tendencies; a good theory of what constitutes consciousness is needed. One of the panelists argued that the question is in any case not really relevant, as AI is about applications and solving problems. However, there may also be a spiritual element in humans creating sentient AI. Most researchers seem to focus on specific technical problems in AI rather than on creating a general AI, but in essence, the types of problems to be solved are very similar. Philosopher Daniel Dennett famously stated that a lot of AI is about building tools, not about building colleagues. However, the neuroscientist on the panel maintained that creating a general AI may help us shed light on the question of what it is to be a sentient human.

This panel discussion was followed by a rather sobering presentation by Iain McGilchrist, a psychiatrist and neuroimaging researcher. He argued that mankind has moved towards preferring our left brain hemisphere over our right. Both hemispheres are used in many tasks, but whereas the right is capable of experiencing the present, of seeing potential, and is flowing, the left is static, builds on a representation, and is prone to lying and manipulation. So how do we develop and use our AI in a way that engages both hemispheres in balance? AI can be an angel (with the potential to help improve our world) and a devil. Automation can displace humans, strip us of our skills, affect the resilience of our society, and be a very powerful tool in the hands of dictators. The AI genie is out of the bottle and cannot be put back (unless our civilization breaks down, which is a distinct possibility). AI needs to be used to further the working of the right hemisphere, by selecting the appropriate projects, but also by letting go of control and bureaucracy. This requires trust.

After this presentation, the contributions became both more practical and more visionary (or gospel-like). AI and data help to feed large student populations in India, protect coral reefs, reduce carbon emissions from the heating of buildings, predict the folding of proteins, help the Mars rovers to navigate, and control plasmas in experimental nuclear fusion reactors.

One of the afternoon parallel tracks was about the Metaverse. It can be described as a 3D immersive extension of our current world wide web, and is therefore seen by some as the next iteration of the internet. For now, due to limitations in hardware and network capacity, the experience is low-fidelity. There are also currently very few good use cases for the metaverse, but there is huge perceived business potential. The role of AI in the metaverse lies in generating environments, managing data flows, safeguarding privacy, and so on. But many questions and challenges remain, relating to interoperability (there is no single metaverse but multiple disconnected ones), privacy, responsibility, security, and governance.

As we are also experiencing at KWR right now in one of our research projects, VR can greatly enhance person-to-person interactions at a distance, giving a much better experience than 2D videoconferencing can provide. Call it Metaverse or not, VR will surely provide many opportunities to the water industry for interactive meetings, training, infrastructure design, exploration of complex datasets, and more, if (or when) the hardware and bandwidth limitations can be overcome.

Day 2

‘Tomorrow’ and ‘Connection’ were the main themes of the second day. Inspired Minds first shared their stories of providing humanitarian aid to Afghanistan and Ukraine. AI, as an effective means of enhancing connections between people, is also expected to make our future better. Joseph Bradley from Tonomus shared his views on how to position AI in an AI-aided society; ‘inclusive’ was his main message. Both the metaverse and mixed reality can inclusively complement our digital and real experiences. With the rapid development of mixed reality immersion, all of us should benefit much more in daily life, for instance by finding the best expert in the world for any particular problem far more efficiently. But once humans and AI start interacting, legal and ethical issues also arise. The examples shared by Jamie Susskind revolved around “norms”: rules and norms restrain self-driving cars from speeding, or automatically ban someone from publishing extremely irritating messages on Twitter. A key question, however, is: which individuals or organizations get to make such rules? And as AI reaches into every corner of our daily lives, are we even able to regulate every do and don’t? This is a question for policy-makers to consider and discuss carefully. Fortunately, as discussed in the panel led by the director of AI of the Ministry of Justice and Security of the Netherlands, joint forces in the EU have been, and will further be, mobilized to assess the risks potentially posed by AI and to make rational policies to avoid or mitigate them. The GDPR is a representative example of protecting people’s privacy in AI applications.

Every small step AI has made is strongly rooted in research. AI can nowadays be found in almost every industry, not only in medical or linguistic applications. The afternoon presentations in the AI in Science track covered topics including climate change, space science, traffic, supply chains, and water. AI was showcased to reduce food waste, and many presentations were concerned with the lifecycle of machine learning models, especially their deployment in production. Note that ML models are not like conventional models that only need to be built once and can then be used indefinitely. Instead, the ‘human-in-the-loop’ is one of the key aspects to consider over an ML model’s lifecycle, in terms of data collection (e.g., citizen science) and model (re-)training and (re-)validation (e.g., active learning). Importantly, this requires us to consider AI, and particularly ML models, over their full lifecycle in our BTO projects. As a research institute aiming to bridge science to practice, KWR is excellent at developing a variety of ML models to address different water-related problems. But we must also recognize that some models are not ready to be deployed in practice and are possibly not reusable in future projects. Developing an ML model without deploying it is like preparing a collection of words without knowing how to put them together into a meaningful sentence. For both KWR and the Dutch water utilities to benefit from the rapid development of AI, both sides should join forces to achieve model deployment in upcoming projects involving AI.
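To make the ‘human-in-the-loop’ idea concrete, the sketch below shows pool-based active learning with uncertainty sampling: the model repeatedly asks an expert (the ‘oracle’) to label the data point it is least certain about, then retrains. This is a minimal illustrative toy, not KWR code; the 1-D nearest-centroid classifier, the synthetic data, and all names are our own assumptions.

```python
def train(labelled):
    """Fit a toy classifier: one centroid (mean value) per class."""
    sums, counts = {}, {}
    for x, y in labelled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def uncertainty(model, x):
    """A small gap between the two nearest centroids means high uncertainty."""
    d = sorted(abs(x - c) for c in model.values())
    return d[1] - d[0] if len(d) > 1 else float("inf")

def active_learning_loop(pool, oracle, seed_labels, rounds=5):
    """Repeatedly retrain and ask the human oracle to label the most
    uncertain remaining pool sample (the human-in-the-loop step)."""
    labelled, pool = list(seed_labels), list(pool)
    for _ in range(rounds):
        model = train(labelled)
        x = min(pool, key=lambda x: uncertainty(model, x))
        pool.remove(x)
        labelled.append((x, oracle(x)))  # expert provides the label
    return train(labelled)

# Toy usage: two classes clustered around 0 and 10; the oracle stands in
# for a domain expert labelling, say, sensor anomalies.
oracle = lambda x: 0 if x < 5 else 1
pool = [0.5, 1.0, 4.4, 4.9, 5.2, 9.0, 9.6]
model = active_learning_loop(pool, oracle, seed_labels=[(0.0, 0), (10.0, 1)])
```

The point of the sketch is the loop structure, not the classifier: deployment means this retrain-and-revalidate cycle keeps running after the model goes into production.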

At the end of this informative day, it is good to feel that it is still a group of real people, rather than AIs, sitting in the meeting room, talking about innovative developments and envisioning the future. We are special because we, as human beings, are concerned about tomorrow and take responsibility for improving it. Yet we are also not special, since we are just an equally inclusive part of tomorrow.