
By Kenzie Love
The problems with AI are well-known, and the promise of the technology is likely overstated. As David Montgomery observes, “it’s risky to leave it in the hands of distant corporations.” As in other areas, however, the work being undertaken by co-ops may offer an alternative to the current domination of AI by a few big players.
Sarah Hubbard notes that “the risks associated with concentrating AI development in monopolistic powers are generally well understood and have been widely discussed in existing literature.” Hubbard further states that “it is critical to explore methods for greater collective community governance over the development and deployment of these systems,” which is what spurred CWCF member co-ops Hypha and CanTrust to collaborate on assessing the feasibility of developing an AI stack. With funding support from Co-operators’ Co-operative Development Program, the project is still in its early stages, but it would represent a departure from corporate-owned AI by giving users the autonomy and privacy those platforms currently lack.
“We’re considering what a cooperative values-driven AI would look like,” says Hypha’s Andi Argast. “We haven’t really seen anything out there like that. There are many conversations about responsible AI and ethical AI, but those terms mean different things to different people. We want to explore what values mean in practice versus theory.”
The soon-to-be-launched project offers a chance to put co-operative values into practice. In contrast to the big platforms, the stack would include a locally run model in which the user controls the data exchange and the data stays within a closed ecosystem, Argast says. One way this could work, she explains, is through retrieval-augmented generation (RAG), in which a large language model (LLM) is given access to subject-specific information not included in its original training data, such as client files or other organization-specific records. This could help co-ops improve their internal knowledge sharing.
“Say you’re a co-operative that’s got 15 years of files on particular clients and you’re working on a client that does this particular type of work, or works with this type of product,” she says. “And if you know that 10 years ago, you worked with somebody like that, you could ask it questions, and then it could go and find the information about that without sharing your data with any external sources.”
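The retrieval step Argast describes can be sketched in a few lines. This is an illustrative toy, not the project's actual stack: it uses simple keyword overlap in place of a real embedding model and vector store, and the file names and contents are hypothetical. The shape is the same, though: score stored internal documents against a question, then pass the best matches to a locally run LLM as context, so no data leaves the organization's own systems.

```python
import re

def tokenize(text):
    """Lowercase and split into word tokens (a toy stand-in for embeddings)."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question, documents, top_k=2):
    """Return the names of the top_k stored files most relevant to the question."""
    q = tokenize(question)
    scored = sorted(
        ((len(q & tokenize(body)), name) for name, body in documents.items()),
        reverse=True,
    )
    return [name for score, name in scored[:top_k] if score > 0]

def build_prompt(question, documents, hits):
    """Assemble a prompt for a locally run LLM: retrieved context plus the question."""
    context = "\n\n".join(f"[{name}]\n{documents[name]}" for name in hits)
    return f"Use only the context below to answer.\n\n{context}\n\nQuestion: {question}"

# Hypothetical internal client files -- in a co-op stack these would stay
# inside the closed ecosystem, never shared with external services.
files = {
    "client_a_2014.txt": "Client A manufactures solar panels for rural electricity co-ops.",
    "client_b_2019.txt": "Client B runs a credit union focused on housing loans.",
}

hits = retrieve("Which client worked with solar panels?", files)
prompt = build_prompt("Which client worked with solar panels?", files, hits)
print(hits[0])  # client_a_2014.txt
```

A production version would swap the keyword score for a local embedding model, but the privacy property Argast highlights comes from the architecture, not the scoring method: retrieval and generation both run on infrastructure the co-op controls.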
While this is a seemingly modest use of the technology, it offers a chance to dial down the hype and explore how, or indeed whether, AI can have a positive impact, a question often lost in the current conversation.
“I don’t really find AI either exciting or scary,” says Argast. “I think AI is, at this point in its evolution, a tool for automation that makes some things easier but also has some concerning implications, as do most pieces of technology. My bigger concern is the transformative narrative that people subscribe to, how revolutionary they feel it might be, and whether this excitement is warranted. AI definitely has some interesting features and uses, but separating hype from reality, and exploring what different development and ownership structures look like, is key.”