The term "Artificial Intelligence" dates back to at least the 1950s, and has been used to refer to a wide range of technologies, many of which otherwise have very little in common. While this broad category has traditionally included a number of highly useful techniques, its modern usage often refers specifically to Large Language Models (LLMs) and so-called Generative AI (genAI). Due to the term's ambiguity and the controversy surrounding modern trends in AI, we aim to provide a policy which distinguishes between these uses and highlights exactly which applications we consider inappropriate for FluConf.
Members of our community voice well-justified concerns about this newer breed of AI. It is premised on non-consensually harvested data, labeled by workers who are not fairly compensated, with little transparency from the corporations that engage in these practices. We reject the notion, as proposed by the Open Source Initiative, that an AI model can reasonably be considered open-source without the availability of its training data. The scale of such models means that individuals without significant monetary and computational resources cannot meaningfully evaluate their qualities and behaviour, rendering the freedoms afforded by supposedly open-source AI a mostly abstract notion.
Likewise, these systems have a considerable impact on the environment: directly or indirectly increasing demand for fossil fuels and exacerbating the climate crisis, drawing considerable amounts of clean water in regions that are already facing droughts, requiring an increase in the production of microchips that include rare minerals, and producing dramatic amounts of e-waste due to the intensiveness of the model training process. Perhaps of even more concern is the fact that these resources and processes are being dedicated to software products that are less reliable than their predecessors.
Much more could be said about the real and potential harms of these technologies; however, it is probably sufficient to state that we are uninterested in providing yet another uncritical platform for the promotion of implicitly corporate technologies. We encourage those who are considering submitting such content to do so elsewhere.
Our internal use
Our stance is simple.
We will not use LLMs to summarize or review any content submitted to FluConf, nor will we knowingly use any assets generated with such technology.
In the unforeseen event that we require something that might be considered AI, such as Optical Character Recognition or automated audio transcription, we will disclose and describe its use.
Content moderation concerning "AI"
We will consider proposals that do not disclose their use of LLMs or generative AI outputs to be plagiarized. The use of such content is strongly discouraged even if it is disclosed.
We are willing to make exceptions under a limited set of circumstances, such as if your inclusion of AI content specifically relates to the topic of your submission and provides necessary context. For example, we concede that research into poisoning attacks or model extraction might be supported by relevant media, or that attempts to measure bias in popular AI models could benefit from examples of their output.
We do welcome submissions related to original research or development into AI that otherwise intersects with FluConf's stated topics. For instance, small-scale models that can be trained and/or run on consumer hardware are conceivably of interest to our community, particularly if they can be trained with consensually-sourced data and without any reliance on networked resources.
We encourage all authors to reflect on the potential harms enabled by their work, regardless of whether it intersects with "AI". Software embodies the biases of its creators, and we consider it vital that we acknowledge and reflect on which biases we hold.
To that end, we happily refer readers to several useful resources on the risks of systems that process data and automate decisions, with or without human oversight:
Data Feminism, by Catherine D'Ignazio and Lauren F. Klein
The Data Hazards project, jointly led by Natalie Zelenka and Nina Di Cara