Approach AI with an open mind

Hydroinformatics knowledge exchange meeting on Generative AI

Several water utilities and KWR are already researching and experimenting with generative AI. Their initial findings and insights were discussed at the first Hydroinformatics meeting of 2024. The participants concluded that every organisation would be well advised to have an AI data scientist and an AI policy, and that it is important to approach AI with an open mind rather than solely from a risk perspective.

The emergence of artificial intelligence (AI) is moving apace. In recent years, generative AI has made major advances. The technology makes it possible to communicate with computers in natural language (for example via ChatGPT), and it promises to put application programming and data analysis within the reach of much larger groups of people. On 26 February, the first Hydroinformatics Knowledge Exchange Meeting of 2024 focused on the theme of generative AI. KWR organises these meetings under the water utilities’ Joint Research Programme (BTO) to give water utilities the opportunity to exchange practical experience with one another. The discussions covered what water utilities can expect from generative AI and what risks are involved.

Always check information generated by AI

More and more data are becoming available digitally. KWR researcher Xin Tian explained that, with the rapid rise of AI in recent years and increasingly powerful computers, AI is increasingly able to generate useful information from raw data. Generative AI of this kind can convert text (including numeric data) into audio, video, images and programming code. The advent of Large Language Models (LLMs) has further accelerated this process, because LLMs can generate new texts from existing literature. Xin did warn that texts generated in this way should not be accepted uncritically: “Always take a careful look at AI products; don’t accept the information blindly and make sure you check it elsewhere.”

Working with relevant applications yourself

Alex van der Helm (Waternet) gave a presentation about developments with generative AI at Waternet. He began by explaining what AI is and how it can recognise patterns, make predictions and, since early this year, reason logically. He then took a closer look at generative AI and discussed the limitations, risks and opportunities it affords. He also described how AI is being used for specific purposes at Waternet, such as predicting the wastewater flow to the Amsterdam West wastewater treatment plant and using AI to help lower emissions of nitrous oxide (a potent greenhouse gas) at the treatment plant.

Waternet conducted a survey of algorithm use and identified fourteen algorithms/AI systems, most of them developed in-house by Waternet’s DataLab. In response to the growing attention and possibilities for use, Waternet is going to develop a policy for the use of AI. The survey indicated that AI raises the following benefits and concerns, among others, for water utilities:

  • Support for the work
  • Building up knowledge about AI
  • Exploring opportunities
  • Focusing on risks relating to personal data, business-critical data and accuracy in LLMs
  • Explaining AI models

General developments in the field of AI are moving very quickly. For specific applications in the water sector, however, Alex argued that the water utilities need to get to work themselves, because they know their own processes best.

Figure 1. The development of AI (source: WRR (2021) Opgave AI. De nieuwe systeemtechnologie)

Three use cases with ChatGPT

Short use cases relating to other applications of generative AI were then discussed. Thomas Hes, a consultant on data-driven sustainability at Waternet, kicked off with a presentation about using ChatGPT to determine the environmental impact of catering at Waternet. Every product has an environmental impact, but the data needed to calculate that impact were incomplete. ChatGPT was asked to provide the required masses and food categories based on product names. Formulating a good question proved difficult, so in the end ChatGPT was asked to do that as well, after which the results improved. Even though the output was not error-free, using AI saved a lot of time in the analysis of the environmental impact.
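The two-step trick Hes described, first asking the model to write the question and then asking that question, can be sketched as follows. The `call_llm` stub, the prompt wording and the ‘mass;category’ answer format are illustrative assumptions standing in for a real chat-model API, not Waternet’s actual setup.

```python
def call_llm(prompt: str) -> str:
    """Offline stub standing in for a real chat-model API call.

    Hard-coded answers keep the sketch runnable without a network;
    in practice this would be a call to an LLM client library.
    """
    if prompt.startswith("Write a prompt"):
        # Step 1 response: the model's own formulation of the question.
        return ("Given a catering product name, return its mass in grams "
                "and its food category, separated by a semicolon.")
    # Step 2 response: the model's answer for one product name.
    return "250;dairy"


# Step 1: meta-prompting -- ask the model to formulate a good question itself.
task = "estimate the mass and food category of a catering product from its name"
extraction_prompt = call_llm(f"Write a prompt that asks a model to {task}.")

# Step 2: apply the generated prompt to each product name in the catering data.
answer = call_llm(f"{extraction_prompt}\nProduct: whole milk 0.25 l")
mass_g, category = answer.split(";")
```

As in the Waternet case, the answers still need a sanity check afterwards: the time saving comes from drafting, not from guaranteed correctness.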

Xin Tian (KWR) presented a use case on converting unstructured text data into structured data in the form of a table. The advantage of using LLMs instead of classic natural language processing in this text-mining approach is that LLMs do not require extensive model training. Tian also talked about the development of an AI chatbot that can classify client conversations.
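A minimal sketch of the text-to-table idea: ask an LLM to emit JSON rows with fixed columns, then validate the result before use. The `call_llm` stub, the pipe-incident example and the column names are illustrative assumptions, not the pipeline Tian presented.

```python
import json


def build_extraction_prompt(text: str) -> str:
    """Ask an LLM to turn free text into JSON rows with fixed columns."""
    return (
        "Extract every pipe incident from the text below as a JSON list of "
        "objects with keys 'location', 'material' and 'year'. Return only "
        "the JSON.\n\n" + text
    )


def call_llm(prompt: str) -> str:
    """Offline stub standing in for a real LLM API call."""
    return '[{"location": "Main St", "material": "PVC", "year": 1998}]'


def text_to_table(text: str) -> list:
    """Convert unstructured text into table rows via an LLM, then validate.

    Unlike classic NLP pipelines, no task-specific model training is
    needed -- but the generated output must still be checked.
    """
    rows = json.loads(call_llm(build_extraction_prompt(text)))
    for row in rows:
        if not {"location", "material", "year"} <= row.keys():
            raise ValueError(f"missing columns in row: {row}")
    return rows


table = text_to_table("In 1998 the PVC main on Main St burst and was replaced.")
```

The validation step reflects the earlier warning not to accept generated information blindly: schema checks catch at least the structural errors an LLM can introduce.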

Dennis Zanutto, KWR researcher and PhD candidate at the Politecnico di Milano, described the use of ChatGPT to translate a survey and its responses and then to draw conclusions. Zanutto also gave examples of using ChatGPT to create programming code and a literature review.

Nicael Jooste (Dunea) described how a proprietary, closed version of ChatGPT supported Dunea in the analysis of a large number of its own operational plans. Here, ChatGPT helped to review the plans on data maturity, the appointment of a ‘data steward’, which CPIs are listed, and the presence of topics from the management assessment. This saved employees a lot of tedious searching and querying. One challenge was that ChatGPT is not familiar with the abbreviations used; one option is to make standardised language use mandatory in operational plans.

Human-Centred AI is a necessity

Jie Jang, an assistant professor in the Web Information Systems (WIS) group at Delft University of Technology, researches AI technologies. He talked about Human-Centred AI and put a question to the audience: can LLMs understand language? Two positions emerged:

  1. Computers can understand human language, as their ability to hold online conversations shows.
  2. There is a barrier between language and its actual meaning.

Jie Jang talked about the importance of robustness in AI and the need to take a system’s context into account. That is why it is important to always include human judgement in the approach. Human-Centred AI therefore involves developing practices and practical tools that keep humans involved in the computations, in order to build reliable AI systems that can support human activities. Jang’s laboratory is currently working on the ‘Genius’ tool, which integrates language, knowledge and interaction with people. Turning to the question of whether we can safely outsource tasks to machines in the future, Jang did not think we have reached that stage yet.

Figure 2. People must be part of the use of AI (source: presentation by Dr. Jie Jang)

Approaching AI with an open mind

During the closing discussion, the participants concluded that every organisation would be well advised to have an AI data scientist and an AI policy. For successful AI applications, it is important to approach AI with an open mind rather than solely from a risk perspective. Data quality remains the main obstacle to the adoption of AI, and ‘vendor lock-in’ is a threat to using it well. There are probably not yet many applications designed specifically for drinking water utilities, but those that are already available can certainly help to make the job easier.