Open neural networks: the intersection of AI and web3

by Rishin Sharma & Jake Brukhman.

Special thanks to everyone who gave feedback on this piece, including Nick Yakovenko, David Pakman, Jan Coppens, AC, Evan Feng, Adi Sideman.

Prompt: “translucent cyborg sitting on a metal throne in a futuristic castle, cyberpunk, highly detailed, sharp lines, neon lights”

Source: AI-generated image from Lexica.art, a stable diffusion search engine

Technological innovation never rests, and this is especially true for artificial intelligence. Over the past few years, we have seen deep learning models re-emerge as the forerunners in AI. Also referred to as neural networks, these models are composed of densely interconnected layers of nodes that pass information to one another, roughly mimicking the structure of the human brain. In the early 2010s, the most advanced models had millions of parameters and were heavily supervised systems built for narrow tasks such as sentiment analysis and classification. Today’s most advanced models, such as DreamStudio, GPT-3, DALL-E 2, and Imagen, are approaching one trillion parameters and are accomplishing complex and even creative tasks that rival human work. Take, for instance, this blog post’s header image or summary: both were produced by artificial intelligence. We are just beginning to see the social and cultural implications of these models as they shape how we learn new things, interact with one another, and express ourselves creatively.

However, much of the technical know-how, the key datasets, and the computational capacity needed to train large neural networks today are closed source and gated by “Big Tech” companies like Google and Meta. While open source replicas such as GPT-NeoX, DALLE-mega, and BLOOM have been spearheaded by organizations including StabilityAI, EleutherAI, and HuggingFace, web3 is poised to supercharge open source AI even further.

A web3 infrastructure layer for AI could introduce elements of open source development, community ownership and governance, and universal access that create new models and efficiencies in developing these new technologies.

Further, many critical use cases for web3 will be enhanced by the adoption of AI technologies. From generative art NFTs to metaversal landscapes, AI will find many use cases in web3. Open source AI fits within the open, decentralized, and democratized ethos of web3 and represents an alternative to the AI provided by Big Tech, which is not likely to become open any time soon.

Foundation models are neural networks trained on extensive datasets to perform tasks that would normally require intelligent human behavior. These models have created some impressive results.

Language models such as OpenAI’s GPT-3, Google’s LaMDA, and Nvidia’s Megatron-Turing NLG have the capability to comprehend and produce natural language, summarize and synthesize text, and even write computer code.

DALL-E 2 is OpenAI’s text-to-image diffusion model that can produce unique images from written text. Google has produced competing models including PaLM, a 540-billion-parameter language model, and Imagen, its own image-generation model that outperforms DALL-E 2 on the DrawBench and COCO FID benchmarks. Imagen notably produces more photorealistic results and can render legible text within images.

Reinforcement learning models such as DeepMind’s AlphaGo have defeated the human Go world champion while discovering novel strategies and playing techniques that had not surfaced in the game’s roughly three-thousand-year history.

The race to build complex foundation models has already begun, with Big Tech at the forefront of innovation. As exciting as the field’s advancement is, one key theme is cause for concern.

Over the past decade, as AI models have become more sophisticated, they have also become increasingly closed to the public.

Tech giants are investing heavily in producing such models and retaining the data and code as proprietary technology, preserving their competitive moat through economies of scale in model training and computation.

For any third party, producing foundation models is a resource-intensive process with three major bottlenecks: data, compute, and monetization.

This is where we see web3 themes making early inroads in solving some of these issues.

Labeled datasets are critical for building effective models. AI systems learn by generalizing from examples within datasets and continually improve as they are trained over time. However, compiling and labeling quality datasets requires specialized knowledge and processing in addition to computational resources. Large tech companies often have internal data teams specialized in working with large, proprietary datasets and IP systems to train their models, and they have little incentive to open up access to, or distribution of, that data.

There are already communities that are making model training open and accessible to a global community of researchers. Here are some examples:

  1. Common Crawl, a public repository spanning ten years of internet data, can be used for general training. (Though research shows that more precise, pared-down datasets can improve the general cross-domain knowledge and downstream generalization capabilities of models.)
  2. LAION is a non-profit organization that aims to make large-scale machine learning models and datasets available to the general public. It released LAION-5B, a dataset of 5.85 billion CLIP-filtered image-text pairs that, upon release, became the largest openly accessible image-text dataset in the world.
  3. EleutherAI is a decentralized collective that released one of the largest open source text datasets, The Pile: an 825 GiB English-language dataset for language modeling that draws on 22 different data sources.

Currently, these communities are organized informally and rely on contributions from a wide volunteer base. To supercharge their efforts, token rewards can be used as a mechanism for creating open source datasets. Tokens could be emitted for contributions, such as labeling a large text-image dataset, and a DAO community could validate such claims. Ultimately, large models could issue tokens from a common pool, and downstream revenue from products built on top of those models would accrue to the token’s value. In this way, dataset contributors hold a stake in the large models through their tokens, and researchers are able to monetize building resources in the open.
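
As a rough illustration of how such a flow could work, here is a minimal sketch in Python. The names, reward rate, and quorum size are hypothetical assumptions rather than a description of any existing protocol; in practice this logic would live in a smart contract, with the validation rules set by DAO governance.

```python
# Minimal sketch (hypothetical): contributors submit labeling claims,
# DAO members vote to validate them, and validated claims draw rewards
# from a common token pool.
from dataclasses import dataclass, field

REWARD_PER_LABEL = 0.5   # assumed emission rate, in tokens per labeled pair
VALIDATION_QUORUM = 3    # assumed number of DAO approvals required

@dataclass
class LabelingClaim:
    contributor: str
    labels_submitted: int
    approvals: set = field(default_factory=set)
    paid: bool = False

class DatasetRewardPool:
    def __init__(self, pool_tokens: float):
        self.pool_tokens = pool_tokens
        self.balances: dict[str, float] = {}

    def validate(self, claim: LabelingClaim, dao_member: str) -> None:
        """Record a DAO member's approval; pay out once quorum is reached."""
        claim.approvals.add(dao_member)
        if len(claim.approvals) >= VALIDATION_QUORUM and not claim.paid:
            reward = min(claim.labels_submitted * REWARD_PER_LABEL, self.pool_tokens)
            self.pool_tokens -= reward
            self.balances[claim.contributor] = self.balances.get(claim.contributor, 0.0) + reward
            claim.paid = True

# Example: a contributor labels 1,000 image-text pairs and three DAO members validate.
pool = DatasetRewardPool(pool_tokens=1_000_000)
claim = LabelingClaim(contributor="alice", labels_submitted=1_000)
for member in ("dao-1", "dao-2", "dao-3"):
    pool.validate(claim, member)
print(pool.balances)  # {'alice': 500.0}
```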

Compiling well-constructed open source datasets is critical to widening research access to large models and improving model performance. Text-image datasets can be expanded by increasing their size and adding filters for different types of images to produce more fine-tuned results. Non-English datasets will be needed to train natural language models that non-English-speaking populations can use. Over time, we can achieve these results much faster and more openly using a web3 approach.

The compute required to train large-scale neural networks is one of the largest bottlenecks for foundation models. Over the past decade, the compute demanded by AI training runs has doubled every 3.4 months. During this period, AI models have gone from image recognition to reinforcement learning agents that beat human champions at strategy games and transformers that power large language models. For instance, OpenAI’s GPT-3 had 175 billion parameters and took 3,640 petaFLOP/s-days to train. This would take roughly two weeks on the world’s fastest supercomputer and over a millennium on a standard laptop. As model sizes only continue to grow, compute remains a bottleneck in the advancement of the field.
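
As a back-of-envelope check on those figures, here is a short Python calculation. The hardware throughputs and the 50% utilization figure are illustrative assumptions, not numbers from this post.

```python
# Rough check of the GPT-3 training-compute figures above.
# Assumptions: ~442 petaFLOP/s peak for a Fugaku-class supercomputer at
# ~50% utilization, and ~5 teraFLOP/s for a well-equipped laptop GPU.

PFLOP = 1e15
SECONDS_PER_DAY = 86_400

gpt3_compute_flops = 3_640 * PFLOP * SECONDS_PER_DAY   # 3,640 petaFLOP/s-days ≈ 3.1e23 FLOPs

supercomputer_flops_per_s = 442 * PFLOP * 0.5   # assumed sustained throughput
laptop_flops_per_s = 5e12                       # assumed laptop throughput

days_on_supercomputer = gpt3_compute_flops / supercomputer_flops_per_s / SECONDS_PER_DAY
years_on_laptop = gpt3_compute_flops / laptop_flops_per_s / SECONDS_PER_DAY / 365

print(f"Supercomputer: ~{days_on_supercomputer:.0f} days")  # ~16 days, on the order of two weeks
print(f"Laptop:        ~{years_on_laptop:,.0f} years")      # ~2,000 years, well over a millennium
```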

AI supercomputers require specific hardware optimized for performing the mathematical operations necessary for training neural networks, such as Graphics Processing Units (GPUs) or Application-Specific Integrated Circuits (ASICs). Today, most of the hardware optimized for this type of computation is controlled by a few oligopolistic cloud service providers like Google Cloud, Amazon Web Services, Microsoft Azure, and IBM Cloud.

This is the next major intersection where we see decentralized compute allocation through public, open networks gaining traction. Decentralized governance can be used to fund and allocate resources for training community-driven projects. Further, a decentralized marketplace model could be openly accessible across geographies so that any researcher can access compute resources. Imagine a bounty system that crowdfunds model training by issuing tokens: successfully crowdfunded proposals would receive prioritized compute, pushing forward innovation wherever demand is highest. For instance, if there is significant demand from the DAO for a Spanish or Hindi GPT model that serves larger swaths of the population, research can be focused on that domain.
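
As a rough sketch of that bounty idea, here is a minimal Python illustration. The proposal names, token amounts, and allocation rule are hypothetical assumptions for this post, not a description of GenSyn or any live protocol.

```python
# Minimal sketch (hypothetical): researchers propose models, token holders
# pledge toward them, and available GPU-hours are granted to the
# most-funded proposals first.
from dataclasses import dataclass, field

@dataclass
class ComputeBounty:
    model_name: str
    requested_gpu_hours: float
    pledged_tokens: float = 0.0
    allocated_gpu_hours: float = 0.0
    backers: dict = field(default_factory=dict)

    def pledge(self, backer: str, amount: float) -> None:
        self.backers[backer] = self.backers.get(backer, 0.0) + amount
        self.pledged_tokens += amount

def allocate_compute(bounties: list[ComputeBounty], gpu_hours_available: float) -> None:
    """Fill the most-funded proposals first until the compute pool runs out."""
    for bounty in sorted(bounties, key=lambda b: b.pledged_tokens, reverse=True):
        grant = min(bounty.requested_gpu_hours, gpu_hours_available)
        bounty.allocated_gpu_hours = grant
        gpu_hours_available -= grant

# Example: strong DAO demand for a Hindi GPT-style model wins priority.
hindi_gpt = ComputeBounty("hindi-gpt", requested_gpu_hours=8_000)
spanish_gpt = ComputeBounty("spanish-gpt", requested_gpu_hours=8_000)
hindi_gpt.pledge("dao-member-1", 900_000)
spanish_gpt.pledge("dao-member-2", 300_000)
allocate_compute([hindi_gpt, spanish_gpt], gpu_hours_available=10_000)
print(hindi_gpt.allocated_gpu_hours, spanish_gpt.allocated_gpu_hours)  # 8000.0 2000.0
```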

Already, companies like GenSyn are working on launching protocols to incentivize and coordinate alternative, cost-efficient, and cloud-based hardware access for deep learning computation. Over time, a shared, decentralized global compute network built with web3 infrastructure will become more cost-efficient to scale and better serve us as we collectively explore the frontier of artificial intelligence.

Datasets and compute will enable this thesis: open source AI models. Over the past few years, large models have become increasingly private as the resource investment necessary to produce them has pushed projects to become closed-source.

Take OpenAI. OpenAI was founded in 2015 as a nonprofit research laboratory with the mission of producing artificial general intelligence for the benefit of all of humanity, a stark contrast to the leaders in AI at the time, Google and Facebook. Over time, fierce competition and pressure for funding have eroded its ideals of transparency and open-sourcing code as OpenAI shifted to a for-profit model and signed a massive $1 billion commercial deal with Microsoft. Further, recent controversy has surrounded its text-to-image model, DALL-E 2, for its generalized censorship. (For instance, DALL-E 2 has banned terms such as ‘gun’, ‘execute’, ‘attack’, and ‘Ukraine’, as well as images of celebrities; such crude censorship blocks prompts like ‘Lebron James attacking the basket’ or ‘a programmer executing a line of code’.) Access to the private beta for these models is implicitly biased toward Western users, cutting off large swaths of the global population from interacting with and informing these models.

This is not how artificial intelligence should be disseminated: guarded, policed, and preserved by a few large tech companies. As with blockchain, novel technology should be applied as equitably as possible so that its benefits are not concentrated among the few who have access. Compounding progress in artificial intelligence should be leveraged openly across different industries, geographies, and communities to collectively discover the most engaging use cases and reach consensus on the fair use of AI. Keeping foundation models open source helps prevent censorship and ensures that bias is carefully monitored in public view.

With a token structure for generalized foundation models, it becomes possible to aggregate a larger pool of contributors who can monetize their work while releasing code open source. Projects like OpenAI that were built with an open source thesis in mind have had to pivot into stand-alone funded companies to compete for talent and resources. Web3 allows open source projects to be just as financially lucrative as, and to rival, those backed by Big Tech’s private investment. Further, innovators building products on top of open source models can build with confidence that the underlying AI is transparent. The downstream effect will be faster adoption and go-to-market for novel artificial intelligence use cases. In the web3 space, this includes security applications that conduct predictive analytics for smart contract vulnerabilities and rug pulls, image generators that can be used to mint NFTs and create metaverse landscapes, digital AI personalities that can exist on-chain to preserve individual ownership, and much more.

Artificial intelligence is one of the fastest-advancing technologies today and will have immense implications for our society as a whole. Today, the field is dominated by Big Tech, as financial investments in talent, data, and compute create significant moats against open source development. Integrating web3 into the infrastructure layer of AI is a crucial step toward ensuring that artificial intelligence systems are built in a way that is fair, open, and accessible. We are already seeing open models drive rapid, public innovation in open spaces like Twitter and HuggingFace, and crypto can supercharge these efforts moving forward.

Here is what the CoinFund team is looking for at the intersection of AI and crypto:

  1. Teams with open artificial intelligence at the core of their mission
  2. Communities that are curating public resources like data and compute to help build AI models
  3. Products that are utilizing AI to bring creativity, security, and innovation to mainstream adoption

If you are building a project at the intersection of AI and web3, chat with us by reaching out to CoinFund on Twitter or email rishin@coinfund.io or jake@coinfund.io.
