
Foundry
Founded Year: 2022
Stage: Incubator/Accelerator | Alive
Total Raised: $80M
Mosaic Score: +395 points in the past 30 days (the Mosaic Score is an algorithm that measures the overall financial health and market potential of private companies)
About Foundry
Foundry provides elastic graphics processing unit compute solutions for artificial intelligence developers across various sectors. The company offers access to NVIDIA GPUs for AI training, fine-tuning, and inference. Its services cater to AI engineers, researchers, and scientists. The company was founded in 2022 and is based in Palo Alto, California.
Expert Collections containing Foundry
Expert Collections are analyst-curated lists that highlight the companies you need to know in the most important technology spaces.
Foundry is included in 1 Expert Collection: Artificial Intelligence.
Artificial Intelligence
7,222 items
Latest Foundry News
Feb 22, 2025
It's an expensive chip called a graphics processing unit. Tech CEOs including Elon Musk, Mark Zuckerberg and Sam Altman think that the difference between dominance and defeat in AI comes down to amassing as many GPUs as possible and networking them together in massive data centers that cost billions of dollars each. If AI requires building at this scale, Silicon Valley's leaders think, then only giants like Microsoft, Meta Platforms, Alphabet's Google and Amazon, or startups with deep-pocketed investors like OpenAI, can afford to do it.
People like Alex Cheema think there's another way.
Cheema, co-founder of EXO Labs, is among a burgeoning group of founders who say they believe success in AI lies in finding pockets of underused GPUs around the world and stitching them together in virtual distributed networks over the internet. These chips can be anywhere: in a university lab, a hedge fund's office or a gaming PC in a teenager's bedroom.
If it works, the setup would allow AI developers to bypass the largest tech companies and compete against OpenAI or Google at far lower cost. That approach, coupled with engineering techniques popularized by Chinese AI startup DeepSeek and other open-source models, could make AI cheaper to develop.
"The fundamental constraint with AI is compute," Cheema says, using the industry term for GPUs. "If you don't have the compute, you can't compete. But if you create this distributed network, maybe we can."
Most advanced GPUs are made by Nvidia. One of its top-of-the-line HGX H100 GPU systems weighs 70 pounds, contains 35,000 parts and starts at a price of a quarter-million dollars. But smaller, less-expensive ones have long been used for other purposes, like making videogames come to life and mining cryptocurrencies. Distributed AI networks would take advantage of the times when these chips aren't rendering Call of Duty or mining bitcoins and connect them online to work together to develop AI systems. The operators of these networks could pay the GPU owners or ask them to donate their chips' time if the AI was being developed for charitable purposes.
Jared Quincy Davis says that while he was a researcher at Google-owned DeepMind, the company was starting to spend more on computing resources than on people. He left in 2022 and created the company Foundry, a platform where customers can look for spare GPUs and rent out their own that aren't being regularly used.
Entrepreneurs are finding these stranded resources in unexpected places. Cheema was recently introduced to a Canadian law firm that is setting up a GPU cluster to be operated on the premises. "While they're asleep, these GPUs aren't doing anything," he says.
The recently founded Exo Labs, which describes its mission as democratizing access to AI, is at the early stages of finding spare GPUs to put together a network. Thousands of organizations have somewhere between 10 and 100 GPUs that frequently aren't being used, Cheema estimates. In aggregate, they have more than xAI, he says, referencing Musk's AI startup, which last year built a 100,000-GPU cluster at a data center in Tennessee.
So far, nobody has built a virtual network of GPUs at scale. Some of those that exist have just hundreds of GPUs. And there are plenty of hurdles to overcome.
Foremost is speed. A distributed network is only as fast as its slowest internet connection, whereas chips in the same data center experience virtually no latency. It also isn't clear if a federated network of GPUs is secure enough to ensure that someone's private information doesn't seep out. And how do you find the people and companies with spare chips in the first place?
Another problem: Building AI models is an expensive endeavor, and the people financing these projects are generally averse to added risks. Vipul Prakash, chief executive of Together.AI, initially founded the company to build a decentralized GPU network and then pivoted to working inside data centers for this reason. "Someone who is going to invest a billion in training a model tends to be conservative," he says. "They're spending a lot of money and they're already taking a lot of other types of risks, and they don't want to take infrastructure risks."
The founders pursuing the decentralized path acknowledge those challenges but argue that it is bad for the economy and entrepreneurs to concentrate computational resources in the hands of a few huge tech companies. They also say they don't need access to a lot of compute to help new AI companies blossom, as evidenced by the success of DeepSeek.
Paul Hainsworth, CEO of decentralized AI company Berkeley Compute, says he has one customer looking to build a cutting-edge AI model larger than the biggest one operated by Meta, which plans to end this year with 1.3 million GPUs. Hainsworth's startup, founded last year, has about 900 GPUs collectively, in two data centers, one in Wyoming and the other in California. It is also developing a way to let people own GPUs as a financial asset that they can rent out, like a vacation home.
"I'm making a big bet that the big tech companies are wrong that all of the value will be accreted to a centralized place," Hainsworth says.
Write to Deepa Seetharaman at deepa.seetharaman@wsj.com
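To make the article's back-of-the-envelope estimate and its main technical hurdle concrete, here is a minimal, hypothetical sketch of the "pool idle GPUs" idea in Python. Every name and number in it (2,500 organizations, an average of 50 idle GPUs each, the link speeds) is an illustrative assumption, not data from Foundry, EXO Labs or anyone else mentioned above, and the code is not any of their software.

```python
# Hypothetical sketch: aggregate idle GPUs scattered across many organizations
# and compare the pooled capacity to a single large data-center cluster.
# All figures below are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class GpuPool:
    owner: str          # e.g. a university lab, a law firm, a gaming PC
    idle_gpus: int      # GPUs sitting unused at a given moment
    link_gbps: float    # internet bandwidth connecting this pool to the network


def aggregate(pools: list[GpuPool]) -> tuple[int, float]:
    """Return total idle GPUs in the virtual network and the slowest link,
    which bounds how quickly a distributed training job can synchronize."""
    total_gpus = sum(p.idle_gpus for p in pools)
    bottleneck_gbps = min(p.link_gbps for p in pools)
    return total_gpus, bottleneck_gbps


if __name__ == "__main__":
    # Rough version of Cheema's estimate: thousands of organizations, each
    # holding 10-100 mostly idle GPUs (assumed average of 50 here).
    pools = [GpuPool(f"org-{i}", idle_gpus=50, link_gbps=1.0) for i in range(2500)]
    total, bottleneck = aggregate(pools)

    print(f"Pooled idle GPUs: {total}")            # 125,000 on these assumptions
    print("Single-site comparison: 100,000 GPUs")  # the xAI-scale cluster cited above
    # The catch the article raises: chips in one data center share a
    # low-latency fabric, while this virtual network is only as fast as
    # its slowest internet connection.
    print(f"Slowest link in the pool: {bottleneck} Gbps")
```

On these assumptions the pooled total edges past a 100,000-GPU cluster, but the min() over link speeds is the constraint the article highlights: the whole network can synchronize no faster than its weakest connection allows.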
Foundry Frequently Asked Questions (FAQ)
When was Foundry founded?
Foundry was founded in 2022.
Where is Foundry's headquarters?
Foundry's headquarters is located at 555 Bryant Street, Palo Alto.
What is Foundry's latest funding round?
Foundry's latest funding round is Incubator/Accelerator.
How much did Foundry raise?
Foundry raised a total of $80M.
Who are the investors of Foundry?
Investors of Foundry include Plug and Play Silicon Valley summit, Lightspeed Venture Partners, M12, Jeff Dean, Conviction Capital and 6 more.
Who are Foundry's competitors?
Competitors of Foundry include VESSL AI and 7 more.
Compare Foundry to Competitors

CoreWeave provides services such as computing, managed Kubernetes, virtual servers, storage solutions, and networking for sectors requiring intensive computational power, including machine learning, visual effects, and rendering services. CoreWeave was formerly known as Atlantic Crypto. It was founded in 2017 and is based in Roseland, New Jersey.
TensorWave provides a cloud platform that specializes in artificial intelligence workloads. The company offers services for training, fine-tuning, and running inference on AI models. It provides options for bare-metal nodes or fully-managed Kubernetes clusters, along with native support for popular AI frameworks such as Pytorch and TensorFlow. It was founded in 2023 and is based in Las Vegas, Nevada.

Cudo Compute provides GPU cloud solutions within the cloud computing industry. The company offers services including virtual machines, bare metal servers, and GPU clusters that support workloads such as AI, machine learning, and rendering. Cudo Compute serves sectors that require computing resources, including the AI and ML industries. It is based in London, England.

Cerebras focuses on artificial intelligence (AI) and deep learning computing. The company offers a new class of computers, the CS-2, which is designed to train AI models efficiently, with applications in natural language processing (NLP), computer vision, and high-performance computing. Cerebras primarily serves sectors such as health and pharma, energy, government, scientific computing, financial services, and web and social media. It was founded in 2016 and is based in Sunnyvale, California.

Amazon Web Services specializes in cloud computing services, offering scalable and secure IT infrastructure solutions across various industries. The company provides a range of services including compute power, database storage, content delivery, and other functionalities to support the development of sophisticated applications. AWS caters to a diverse clientele, including sectors such as financial services, healthcare, telecommunications, and gaming, by providing industry-specific solutions and technologies like analytics, artificial intelligence, and serverless computing. It was founded in 2006 and is based in Seattle, Washington. Amazon Web Services operates as a subsidiary of Amazon.

Mythic is an analog computing company that specializes in AI acceleration technology. Its products include the M1076 Analog Matrix Processor and M.2 key cards, which provide power-efficient AI inference for edge devices and servers. Mythic primarily serves sectors that require real-time analytics and data throughput, such as smarter cities and spaces, drones and aerospace, and AR/VR applications. Mythic was formerly known as Isocline Engineering. It was founded in 2012 and is based in Austin, Texas.