Throughout modern history, waves of new technology have transformed society in profound and often unforeseen ways. And when – as with the railway boom of the 19th century, the motor car in the 20th and the internet revolution at the turn of the 21st – those technologies were poorly regulated and driven by financial speculators, these transformations often ran out of control, causing widespread unease.
But the rise of artificial intelligence threatens a potential shift in civilisation that dwarfs those of its predecessors, with the prospect of mass job losses, business failures, the use of online bots and deep fakes to subvert democratic processes and the potential for abuse by organised criminals and rogue states.
In March, a group of experts co-signed an open letter to world governments calling for an immediate pause, lasting at least six months, on all training of AI systems more powerful than GPT-4.
They warned: “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
Two months later, another group – including some from the first letter – issued a declaration that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
It’s clear that AI promises huge benefits, too, by liberating workers from menial tasks and driving innovation in fields such as medicine and climate science. Can the democratic ownership model promoted by co-ops help to avoid the mistakes of past tech revolutions, keeping the dangers of AI at bay and harnessing its advantages?
Introducing Cosy AI
One co-op project working in this field is Cosy AI, “a project to create a co-operative framework for the ownership, development, and use of AI that you can trust”.
Drawing on the lessons of the platform co-op movement, it is building a coalition of organisations, businesses and individuals with a commitment to co-operation.
It is looking for “a wide range of co-operative practitioners, community builders, and organisational designers as well as AI experts, researchers, software developers, and others who share a deep concern for a safe and sincerely beneficial use of AI technologies”. Already on board as coalition partners are Platform21 (lead partner), Platform Coops Lab (oversight partner), DisCO.coop (oversight partner), DataUnion, CoopTech Hub, Transekt Agency, Greaterthan Collective, Sinnwerkstatt, Mindlab Institute, Startin’blox, Quriosity Studio, DeCiLo, Evolutesix and Cosmos Co-op.
Related: How are co-ops dealing with AI and the retail revolution?
Its goals are to develop common standards for the safe use and development of AI, based on a co-operative mindset; to provide easy access to safe AI tools, including its own apps; and to create a digital safe space that protects all members from potential pressures, fake information, manipulation and data extraction by AI.
As a first step, it aims to build a CosyAI Suite of safe AI tools that meet commonly defined safety standards, co-operatively owned with fair compensation for contributors.
This would be structured as a co-operative at EU level (SCE), with open membership to people from all around the world – followed by further co-operatives in different regions.
Co-operative entrepreneur Felix Weth, who is leading the project, says it is a response to the way “AI is advancing so fast, and in my view really is going to change quite substantially the way that we do business”.
Bringing co-ops on board
He says he is “a little shocked by how difficult it was to unite co-ops on this topic, to find common ground there”. There was a positive response from some specialised tech co-ops, and from “a few very active people who have also been embracing the web 3.0 blockchain crypto technology”.
But he was disappointed that the topic did not receive more attention at the International Cooperative Alliance research conference, held in Belgium in July. “My impression, overall, was that people have been kind of curious and to some extent open, but no one was prepared to take any substantial steps to embrace the technology and see what, together, we can do about it.
“So now my small company started developing an organisational knowledge tool. We’re also now looking towards customised AI systems.”
Cosy AI already has a pilot customer – the freelancers’ platform co-op Smart Germany. “They are interested in testing an AI assistant for themselves, in particular for doing customer relations. They have quite complex requests from the members, and dealing with them can be partly automated.”
Weth thinks there is a lot of “low hanging fruit” in the co-op movement, with scope to improve internal processes and communications in smaller businesses that are only now beginning to digitise.
Related: Co-ops, generative AI and the creative industries
AI tools would act as “an extension to current staff, so that they can outsource some of their recurring tasks”. These tools can be specialised using data from a co-op “so you avoid some of the shortcomings of the current large language models (LLMs) that hallucinate and make up information”.
Such applications can free up financial resources within capital-starved co-ops, he says, allowing core team members to focus on expert work and leaving menial tasks to AI. This includes email shots to customers, members and potential B2B clients – potentially drumming up lucrative business for little outlay.
“If you have a co-op that is already digitised,” says Weth, “the cost of setting up a specialised assistant is actually not so high. For €2,000-€3,000 you have basically a specialised employee that can do a lot of work.”
Weth is also interested in the question of whether co-ops can offer a path forward for AI to be used safely and responsibly. He’s impressed by moves by the US Congress and the EU to discuss the situation with a view to developing legislation, but warns that the level of financial investment in AI, coupled with improvements in computing capacity, is leading to massive exponential growth in its capability.
Saving the world?
“We’re now at the turning point where we really have to ask, can we handle this well, in terms of politics and also the organisational models? I feel co-ops actually have a role to play.”
He cites the work of AI researcher Yoshua Bengio, who co-signed the two global warnings of the threat of AI and has also suggested democratic structures to control it. Issues highlighted by Bengio include the implications of AI for minority groups.
“Modern AI systems are trained to perform tasks in a way that is consistent with observed data,” Bengio wrote in the Journal of Democracy. “Because those data will often reflect social biases, these systems themselves may discriminate against already marginalised or disempowered groups.
“These issues are far from being resolved as there is little representation of discriminated-against groups among the AI researchers and tech companies developing AI systems and currently no regulatory framework to better protect human rights.”
Bengio also highlighted the threat to democracy, warning: “In the absence of regulation, power and wealth will concentrate in the hands of a few individuals, companies, and countries due to the growing power of AI tools. Such concentration could come at the expense of workers, consumers, market efficiency, and global safety, and would involve the use of personal data that people freely hand over on the internet without necessarily understanding the implications of doing so.
“In the extreme, a few individuals controlling superhuman AIs would accrue a level of power never before seen in human history”.
This is an unintentional harm of AI; Bengio warns it could also be used for deliberate harm – for fraud, cyberattacks, to corrupt elections or to launch bioweapons – or could itself go rogue, perhaps by developing “a strong self-preservation goal, possibly creating an existential threat to humanity”.
Regulation can only go so far in tackling such threats, he warns, as it would not be equally applied around the world and would be disregarded by criminals. Other responses could include international co-operation to develop defensive AI systems to counter threatening AI.
These labs should be publicly funded but non-governmental and non-profit, argues Bengio. This would avoid the “conflict of interest between commercial objectives and the mission of safely defending humanity” and also prevent governments from abusing any developments for their own ends. “An appropriate governance structure for these labs must be put in place to avoid capture by commercial or national interests,” he writes – and if these labs made advances with safe, beneficial applications, “the capabilities to develop and deploy those applications should be shared with academia or industry labs so that humanity as a whole would reap the benefits”.
For Weth, “that sounded to me like there could be a good conversation in terms of what noises can we bring from the co-op scene – in particular from platform co-ops – to that table?
“There is a potential for collaboration just by having responsible governments, a co-operative framework for research labs, and multi-stakeholder ownership. I don’t want to go too deep, but I feel like there’s also real potential to use some of the experience and organisational knowledge that is in the community.”
This may require more openness by policy makers towards democratic ownership, Weth thinks. “It’s almost a universal agreement that the ownership model of big corporations is not ideal for this technology.
“Everyone talks about the AI ‘arms race’ and the ‘gold rush’, and how that leads to no good, so I feel like there’s more and more consensus on the need for this alternative. And I’m just thinking, well, how can we position co-ops as at least a contributor to the discussion? Maybe not the solution, because it’s complex – with the geopolitical aspects, and the whole question about hardware development and who owns the GPUs?”
Weth is now looking to the upcoming platform co-op conference in Kerala, India, at the end of November as “the next step to involve more people” in Cosy AI and “find a way to set up that core, find the right partners”.
His intention has always been for a co-operative effort and “not just me convincing other people”. Discussions have brought “some interesting paths forward”, and the next step is to enlarge the coalition – “so there will be more co-op members who sign up, and then also AI experts, which we haven’t reached out to yet. I wanted to have a good base in the co-op world before reaching out to the experts.
“I feel now the conference in India is the moment where we can do that. And once we have this combination, I’m still convinced that a co-op at the European level can be a good framework”.
One goal is to raise the capital to build up Cosy’s own data centres. “In the short term, it could be data pooling for co-ops and other organisations who feel they want reliable partners” rather than state-of-the-art development or AI training.
“So we won’t be at the forefront of research but we can, at least in terms of application, offer tools that are better when we have sorted out the ownership and data security.”
Next generation
The co-op movement could also address its issues with attracting younger members, he thinks. “In Germany a lot of the co-ops say, how can we reach out to younger people? And if they could embrace this technology for that purpose, that could actually be an interesting way forward. So I feel like there’s really big potential.
“I also believe community is going to be a lot more important in the future. Given the insecurity that all this generated content is going to create – generated videos, deep fakes etc – there’s going to be, I think, a re-evaluation of the value of personal connections. The thought that ‘if I really know someone in person, I trust that person’. I feel co-ops can rely on the communities they’ve built and the infrastructure that they have.”
This organic appeal could offer a huge potential for the next generation of co-op growth, he thinks. “Co-ops are existing networks of people. They are membership organisations rather than just companies with customers. This can also be interesting to young people; it’s community building, with offline events.
“Although it’s not AI and not digital, I think it’s actually a good complementary strategy to combine offline community building with embracing AI tools – and not being overruled by these technologies, but rather, having a strategy using them.
“We had a small co-op summit here in Germany and someone called it ‘islands of trust’. I think co-ops can be islands of trust.”