Podcasts > Stuff You Should Know > Data Centers: Can't Live With Em, Can't Live Without Em

Data Centers: Can't Live With Em, Can't Live Without Em

By iHeartPodcasts

In this episode of Stuff You Should Know, the hosts explore the evolution of data centers from their military computing origins in the 1940s to their central role in today's digital economy. They trace how data centers developed from early business mainframes to modern cloud computing infrastructure, examining the companies and technologies that shaped this transformation.

The hosts also discuss how artificial intelligence is driving unprecedented demand for data center expansion, with tech giants investing heavily in GPU-powered facilities. This growth brings significant environmental challenges, as modern data centers consume resources equivalent to small towns. The episode puts these developments in perspective by comparing facilities like Meta's Hyperion data center to the energy requirements of major cities.


This is a preview of the Shortform summary of the Jan 15, 2026 episode of the Stuff You Should Know podcast.


1-Page Summary

The History and Evolution of Data Centers

The evolution of data centers traces a remarkable journey from military computing to today's digital economy. Starting with the British Colossus and American ENIAC in the 1940s, these early military computers laid the foundation for modern computing, despite their limited storage capabilities.

The 1950s marked a pivotal shift as businesses began adopting on-site data processing. IBM emerged as an industry leader, with its System/360 mainframe later supporting ground operations for missions like Apollo 11. As personal computers and the internet gained prominence, data centers expanded to meet growing storage needs, enabling the rise of e-commerce and web services.

Impact of Cloud Computing on Data Centers

Cloud computing transformed data management by enabling remote data access without on-site infrastructure. Companies like Dropbox pioneered new business models by utilizing cloud services from providers like Amazon Web Services. This shift spurred the rapid growth of hyperscale data centers, with tech giants like Google, Amazon, and Microsoft leading the expansion.

AI's Growing Role and GPU-powered Data Center Demand

Josh Clark explains that AI has dramatically changed data center requirements, with GPUs becoming essential due to their superior parallel processing capabilities. Major tech firms are investing heavily in AI infrastructure, with Morgan Stanley projecting nearly $3 trillion in data center spending from 2025 to 2030.

Chuck Bryant and Josh Clark emphasize that while these developments are important, they come with significant environmental challenges. Modern AI data centers consume as much electricity and water as a town of 50,000 people, raising serious sustainability concerns. Meta's Hyperion data center alone is expected to use about half of New York City's peak energy load, highlighting the mounting pressure on energy grids and natural resources.

Additional Materials

Clarifications

  • The British Colossus was the world's first programmable electronic digital computer, used during World War II to decrypt German messages. The American ENIAC was one of the earliest general-purpose electronic digital computers, designed to calculate artillery firing tables. Both machines marked the transition from mechanical to electronic computing, significantly advancing computational speed and capability. Their development laid the groundwork for modern computer architecture and data processing.
  • A mainframe computer is a large, powerful machine designed to handle vast amounts of data and support many users simultaneously. IBM's System/360, introduced in the 1960s, was revolutionary for its compatibility across different models, allowing businesses to upgrade without rewriting software. It was widely used for critical applications in government, finance, and large enterprises. The System/360 set the standard for future computer architectures and business computing.
  • IBM's System/360 was a family of mainframe computers introduced in 1964, five years before the Apollo 11 mission in 1969. The Apollo 11 mission primarily used the Apollo Guidance Computer (AGC) for navigation and control, not the System/360. However, IBM's mainframes, including earlier models, supported mission planning, simulations, and data analysis on the ground. Thus, System/360's role was indirect, aiding mission support rather than onboard operations.
  • Hyperscale data centers are massive facilities designed to efficiently support large-scale cloud services and big data processing. They use standardized hardware and software to enable rapid scaling of computing resources. These centers optimize power, cooling, and space to reduce costs and improve performance. Hyperscale providers often own multiple such centers worldwide to ensure reliability and low latency.
  • GPUs, or Graphics Processing Units, were originally designed to render images and video quickly. They excel at handling many tasks simultaneously, making them ideal for the parallel computations required in AI algorithms. Unlike traditional CPUs, GPUs can process large blocks of data in parallel, speeding up machine learning training and inference. This efficiency is why GPUs are critical for modern AI workloads.
  • The "$3 trillion in data center spending" refers to the total global investment expected over five years to build, upgrade, and operate data centers. This includes costs for hardware, software, energy, and facilities needed to support growing digital services and AI workloads. Such massive spending reflects the critical role data centers play in powering the internet, cloud computing, and AI technologies. It also indicates the scale of economic activity and infrastructure development driven by the digital economy.
  • Data centers require large amounts of electricity to power servers and cooling systems that prevent overheating. Water is often used in cooling processes, such as evaporative cooling towers, to dissipate heat efficiently. The scale of consumption can rival that of small cities, stressing local energy grids and water supplies. This environmental impact drives efforts to develop more energy-efficient technologies and renewable energy use in data centers.
  • Meta's Hyperion data center is one of the largest and most energy-intensive facilities in the world. New York City's peak energy load represents the maximum electricity demand at any given time. Using half of this peak load means the data center alone requires an enormous amount of power, comparable to a large city's consumption. This highlights the challenge of balancing technological growth with sustainable energy use.
  • Parallel processing refers to the ability of a computer to perform many calculations or tasks simultaneously. GPUs (Graphics Processing Units) are designed with thousands of smaller cores that handle multiple operations at once, unlike CPUs which have fewer cores optimized for sequential tasks. This makes GPUs especially efficient for AI workloads, which involve large-scale data and complex computations. By processing many tasks in parallel, GPUs significantly speed up AI model training and inference.
  • Dropbox pioneered the "freemium" model, offering free basic storage with paid upgrades for more space and features. They relied on cloud providers like Amazon Web Services to store user data remotely, avoiding the need to build their own data centers. This allowed rapid scaling and reduced upfront costs. Their model shifted software from a product to a service accessed online.
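The CPU-versus-GPU contrast in the notes above can be sketched in plain Python. This is only an analogy (threads stand in for GPU cores, which the real hardware implements very differently), but it shows the difference between sequential and data-parallel styles of work:

```python
from concurrent.futures import ThreadPoolExecutor

# Sequential: one worker handles every element in order, the way a
# single CPU core processes a stream of instructions.
def scale_sequential(values, factor):
    return [v * factor for v in values]

# Data-parallel: the same operation is mapped across many workers at
# once. GPUs take this idea to the extreme, with thousands of cores
# each applying the identical operation to its own slice of the data.
# (Python threads are only an analogy here, not real GPU execution.)
def scale_parallel(values, factor, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda v: v * factor, values))

assert scale_sequential([1, 2, 3], 2) == [2, 4, 6]
assert scale_parallel([1, 2, 3], 2) == [2, 4, 6]
```

The key property is that each element's result is independent of the others, so the work can be split across any number of workers without coordination. AI workloads, dominated by large matrix operations, have exactly this shape.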

Counterarguments

  • While early military computers were foundational, it's important to recognize the contributions of academic research and private sector innovation in the evolution of computing technology.
  • IBM was a significant player, but other companies like DEC and later Sun Microsystems also played crucial roles in the development of computing and data processing.
  • The System/360 was influential, but it was one of many technologies that supported the Apollo missions; the guidance computers developed by MIT Instrumentation Laboratory were also critical.
  • The narrative might understate the role of smaller data centers and colocation facilities that continue to serve local businesses and provide specialized services.
  • Cloud computing has indeed transformed data management, but it also led to concerns about data sovereignty, privacy, and the risks associated with centralization of data.
  • The success of companies like Dropbox is notable, but the market also includes a wide array of competitors and technology solutions that contribute to the diversity of cloud services.
  • The growth of hyperscale data centers is a trend, but there is also a counter-trend towards edge computing, which distributes processing closer to the data source to reduce latency and bandwidth use.
  • AI's impact on data centers is significant, but it's also worth noting that not all AI applications require GPU acceleration, and there are ongoing developments in other types of specialized processors, such as TPUs and FPGAs.
  • The projected spending on data centers could be impacted by economic fluctuations, changes in technology, or shifts in business strategies that could either increase or decrease the actual amount spent.
  • The environmental impact of data centers is a concern, but many companies are actively investing in renewable energy and efficiency improvements to mitigate these issues.
  • The comparison of Meta's Hyperion data center to New York City's energy load may not account for future improvements in energy efficiency or the adoption of renewable energy sources by the data center industry.


The History and Evolution of Data Centers

The journey from the colossal machines of the mid-20th century to today’s ubiquitous digital economy was made possible by the evolution of data centers.

Evolution of Data Centers From Mainframes to Digital Economy

Colossus and ENIAC, 1940s Military Computers

The earliest data centers were massive electronic machines used predominantly by the military. A notable first in programmable computing was the British Colossus, used during World War II to decrypt German communications, including, the hosts note, messages between Hitler and Goebbels. Built with cutting-edge vacuum-tube technology and programmed through manual switches and plugs, Colossus was vital to the codebreaking effort. Meanwhile, the American ENIAC, used to compute artillery firing tables, demonstrated early data processing capabilities. Neither machine was geared toward storage, but both laid the groundwork for modern computing. Today, the site of Colossus at Bletchley Park, known as Block H, is preserved as the National Museum of Computing.

1950s Mainframes Enabled On-site Data Processing, Founding Digital Economy

The 1950s represented a turning point as companies began to process data on-site, stepping beyond military applications. Mainframes, initially a term for the cabinets containing telecommunications gear, became vital in business settings. Lyons, a UK tea shop chain, was a pioneer in this regard with their LEO (Lyons Electronic Office) computer that handled payroll and stock management while also working on computations for the Ministry of Defense.

IBM emerged as a leader in this space in the early 1950s. An early example of IBM's clout is the leasing of a mainframe unit in 1952 for $16,000 a month. Clark highlights IBM's System/360 as a benchmark in mainframe technology in the 1960s, supporting ground operations for endeavors like the Apollo 11 moon mission. Even today, mainframes remain in use at organizations such as Visa and healthcare providers, prized for their reliability and security.

Rise of PCs and Internet Led To Data Centers For Web Services and E-Commerce

With the rise of personal computers and the internet, the need for dat ...


Additional Materials

Actionables

  • Explore the history of computing by visiting the National Museum of Computing or similar museums virtually to see the evolution of data centers firsthand. By engaging with interactive exhibits online, you can gain a deeper appreciation for the technological advancements from the Colossus to modern mainframes, which can inspire a greater understanding of the digital infrastructure that underpins today's economy.
  • Create a personal inventory of data-intensive devices and services you use to understand your own reliance on modern data centers. This could include listing your smartphones, computers, cloud storage accounts, and streaming services, then researching how each connects to and depends on data centers, fostering an awareness of the invisible networks that support your digital lifestyle.
  • Encourage local schools or community centers to in ...


Impact of Cloud Computing on Data Centers

The emergence of cloud computing has revolutionized data storage and processing, leading to significant changes in how data centers operate and the services they provide.

Cloud Computing Revolutionized Data Storage and Processing With Internet-Accessible Off-site Data Centers

Cloud Computing Enables Remote Data Access Without On-site Infrastructure

Cloud computing has profoundly affected data management by allowing for remote data access without needing on-site infrastructure. This technology emerged in the early 2000s and marked a notable departure from traditional data storage methods. Rather than keeping data localized, cloud computing relies on third-party managed off-site data centers, enabling interconnected storage and access.

Companies Like Dropbox Used Cloud Computing for Scalable Data Storage Services

An example of the transformative power of cloud computing is Dropbox's business model. The company capitalized on cloud services by purchasing storage from Amazon Web Services, then reselling it in tiered plans to end users. This approach demonstrates a scalable, cloud-based data storage service model that was not possible before the cloud era.
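The reselling model described above can be sketched in a few lines of Python. Every number here is invented for illustration; these are not Dropbox's actual plans or AWS's actual prices. The point is the shape of the business: buy raw storage wholesale, resell it in fixed retail tiers, and rely on most users not filling their quota:

```python
# Hypothetical wholesale price paid to a cloud provider (invented value).
WHOLESALE_PER_GB_MONTH = 0.005

# Hypothetical retail tiers: (plan name, included GB, monthly price).
TIERS = [
    ("free", 2, 0.00),       # loss leader to attract users
    ("plus", 2_000, 9.99),
    ("pro", 3_000, 16.58),
]

def monthly_margin(plan_gb, plan_price, utilization=0.5):
    """Revenue minus wholesale cost, assuming users fill only part of
    their quota. Overselling unused quota is what makes tiers work."""
    cost = plan_gb * utilization * WHOLESALE_PER_GB_MONTH
    return plan_price - cost

for name, gb, price in TIERS:
    print(name, round(monthly_margin(gb, price), 2))
```

With these assumed numbers the free tier runs at a small loss while paid tiers are profitable, which is the usual logic of a freemium storage service: the provider never buys a dedicated machine per customer, it just rents more wholesale capacity as paid usage grows.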

Cloud Shift Drives Rapid Hyperscale Data Center Growth By Google, Amazon, & Microsoft

The cloud shift has acc ...


Additional Materials

Clarifications

  • Cloud computing is the delivery of computing services like storage, processing, and software over the internet. It allows users to access and use resources on remote servers instead of local computers or servers. This model offers flexibility, scalability, and cost savings by paying only for what is used. It eliminates the need for businesses to maintain physical hardware on-site.
  • Off-site data centers are facilities located away from a company's physical premises, managed by third-party providers. Traditional data centers are typically owned and operated on-site by the company itself. Off-site centers offer scalability and reduce the need for companies to maintain their own hardware. This shift allows businesses to access computing resources over the internet without investing in physical infrastructure.
  • "Third-party managed" means that the data center is owned and operated by an external company, not the user or organization storing data. This company handles maintenance, security, and infrastructure management. Users rent space or services from these providers instead of building their own data centers. This model reduces costs and complexity for users.
  • "Interconnected storage and access" means multiple data centers and servers are linked via the internet to work together. This network allows data to be stored in several locations and accessed from anywhere, improving reliability and speed. It also enables automatic data backup and load balancing across servers. This system supports seamless user experiences despite physical distances.
  • Scalable data storage services allow companies to easily increase or decrease storage capacity based on user demand without buying physical hardware. Dropbox uses cloud providers like Amazon Web Services to rent large amounts of storage space instead of building its own data centers. This rental model lets Dropbox offer flexible plans to customers, adjusting resources as needed. It reduces costs and speeds up service deployment compared to traditional storage methods.
  • Amazon Web Services (AWS) is a comprehensive cloud platform offering computing power, storage, and other services over the internet. It allows businesses to rent virtual servers and storage instead of owning physical hardware. AWS provides scalable, flexible resources that can be adjusted based on demand. It is a key infrastructure provider enabling many companies to build and run cloud-based applications.
  • Hyperscale data centers are massive facilities designed to efficiently support large-scale cloud services and big data processing. They use advanced automation and modular designs to quickly scale resources up or down based on demand. These centers optimize power usage and cooling to reduce operational costs and environmental impact. Their scale and efficiency enable tech giants to deliver reliable, high-performance cloud services globally.
  • Having over 5,000 servers classifies a data center as "hyperscale," indicating massive computing power and storage capacity. This scale allows efficient handling of enormous data volumes and high traffic from mill ...

Counterarguments

  • While cloud computing allows for remote data access, it can also introduce concerns about data sovereignty and privacy, as data is stored in third-party data centers often located in different jurisdictions.
  • The reliance on third-party managed off-site data centers can lead to vendor lock-in, where customers become dependent on a single provider's infrastructure and services.
  • The scalability of services like Dropbox is indeed a benefit, but it can also lead to increased complexity in managing data and higher costs as storage needs grow.
  • The rapid growth of hyperscale data centers contributes to environmental concerns, as these facilities consume large amounts of energy and resources.
  • The dominance of companies like Google, Amazon, and Microsoft in the cloud services market can stifle competition and innovation due to their significant market power. ...


AI's Growing Role and GPU-powered Data Center Demand

The rise of AI technologies like OpenAI's ChatGPT has led to a surge in demand for high-performance data centers equipped with powerful GPUs, creating both opportunities and significant environmental challenges.

AI Growth Boosts Demand for High-Performance Data Centers

AI Models Need GPU Power Over Traditional CPUs

Josh Clark states that the advent of AI has considerably impacted data center operations: CPUs are no longer sufficient for AI's processing demands. GPUs have become essential because their parallel processing capabilities let them run many operations simultaneously, which is crucial for AI workloads. NVIDIA chips in particular are in high demand among major companies building AI applications. That demand has driven up prices for consumer NVIDIA graphics cards, leaving gamers struggling to buy them.

Tech Firms Invest Billions in AI Data Centers With Hundreds of Thousands of GPUs

AI data centers require significant investment because running complex models takes hundreds of thousands of GPUs. xAI's Colossus machine in Memphis, Tennessee, exemplifies this scale of infrastructure, wielding 200,000 GPUs. Financial analysts are tracking the trend, with Morgan Stanley estimating nearly $3 trillion will be spent on data centers from 2025 to 2030, half of which is expected to go toward hardware. Microsoft, Amazon, Google, and Meta are making monumental investments, with Microsoft committing $30 billion to UK centers and planning 100 more AI data centers in the UK alone.
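As a rough sanity check on those figures (treating 2025 through 2030 as six years, which is an assumption about how the projection is counted):

```python
# Back-of-envelope arithmetic on the Morgan Stanley figures cited above.
total_spend = 3_000_000_000_000   # ~$3 trillion projected, 2025-2030
years = 6                         # 2025 through 2030 inclusive (assumed)
hardware_share = 0.5              # roughly half expected to go to hardware

per_year = total_spend / years
hardware_total = total_spend * hardware_share

print(f"~${per_year / 1e9:.0f}B per year")          # ~$500B per year
print(f"~${hardware_total / 1e12:.1f}T on hardware")  # ~$1.5T on hardware
```

Roughly half a trillion dollars a year, with about $1.5 trillion of the total going to chips and servers, gives a sense of why GPU supply dominates the conversation.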

AI-driven Data Center Surge Impacts Environment, Poses Sustainability Challenges With High Energy and Water Use

The environmental implications of this AI data center boom are profound. Data centers consume as much electricity and water as a town of 50,000 people. They use water for evaporative cooling to counteract the heat generated by the proces ...


Additional Materials

Clarifications

  • CPUs (Central Processing Units) are designed for general-purpose computing with a few powerful cores optimized for sequential tasks. GPUs (Graphics Processing Units) have thousands of smaller cores designed for parallel processing, making them ideal for handling many simultaneous calculations. AI workloads, like neural network training, require massive parallelism to process large datasets efficiently, which GPUs provide. This architectural difference makes GPUs much faster and more efficient than CPUs for AI tasks.
  • Parallel processing in GPUs means they can perform many calculations at the same time, unlike CPUs that handle tasks mostly one after another. This is because GPUs have thousands of smaller cores designed to work simultaneously on different parts of a problem. This ability makes GPUs especially efficient for AI tasks, which involve large amounts of data and repetitive mathematical operations. It allows AI models to be trained and run much faster than with traditional CPUs.
  • NVIDIA chips, especially their GPUs, are designed with thousands of small cores that handle many tasks simultaneously, ideal for AI's parallel processing needs. Their CUDA platform allows developers to optimize AI algorithms specifically for NVIDIA hardware, boosting performance. NVIDIA also invests heavily in AI research and software tools, creating an ecosystem that supports rapid AI development. This combination makes NVIDIA GPUs the preferred choice for training and running complex AI models.
  • A data center is a facility housing many computer servers that store, process, and manage large amounts of data. GPUs (Graphics Processing Units) are specialized processors designed to handle complex calculations simultaneously, making them ideal for AI tasks. AI models require massive computational power to train and run, which is why thousands or even hundreds of thousands of GPUs are needed. This scale enables faster processing and supports the vast data and model sizes used in AI applications.
  • xAI's Colossus machine is a supercomputer designed specifically for AI research and development. It uses 200,000 GPUs to handle massive parallel processing tasks required by advanced AI models. This scale allows it to train and run complex AI algorithms much faster than traditional systems. The machine represents a significant leap in computational power dedicated solely to AI workloads.
  • Evaporative cooling uses water evaporation to absorb heat and lower air temperature inside data centers. It is more energy-efficient than traditional air conditioning but requires large water volumes. This method helps manage the intense heat generated by GPUs and CPUs during operation. However, it raises concerns about water resource depletion, especially in water-scarce regions.
  • Data centers consume large amounts of electricity to power thousands of servers and GPUs running complex computations continuously. The intense processing generates significant heat, requiring cooling systems to prevent hardware damage. Water is often used in evaporative cooling systems, where it absorbs heat and evaporates, efficiently lowering temperatures. This combination of high power use and water-based cooling leads to substantial resource consumption.
  • Peak energy load refers to the highest amount of electrical power a city or region consumes at a specific time, usually during extreme weather or high activity periods. It is critical because energy infrastructure must be capable of meeting this maximum demand to avoid blackouts. Cities like New York experience peak loads that strain their power grids, ...
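The water figures in these notes can be bounded with simple physics. The facility size and the all-evaporative assumption below are illustrative choices, not numbers from the episode; the latent-heat constant is a standard physical value:

```python
# Back-of-envelope estimate of evaporative-cooling water use.
# Assumptions (not from the episode): a 100 MW facility that rejects
# ALL of its heat by evaporating water, an upper-bound simplification.
heat_load_watts = 100e6   # assumed facility heat load (100 MW)
latent_heat = 2.45e6      # J per kg of water evaporated (~20 C, standard)

kg_per_second = heat_load_watts / latent_heat
liters_per_day = kg_per_second * 86_400   # 1 kg of water is ~1 liter

print(f"~{kg_per_second:.0f} kg/s, "
      f"~{liters_per_day / 1e6:.1f} million liters/day")
```

About 41 kg of water per second, or roughly 3.5 million liters a day under these assumptions. Real facilities evaporate only part of their heat away and recirculate much of the water, but the order of magnitude shows why a single large data center can rival a town's water demand.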

Counterarguments

  • GPUs are not the only hardware capable of AI processing; there are alternative architectures like TPUs and FPGAs that are also being used and developed for AI workloads.
  • The high demand for NVIDIA GPUs may incentivize competitors to innovate, potentially leading to a broader market of AI-accelerating hardware and possibly reducing costs over time.
  • The investment figures from Morgan Stanley are projections and could change based on market conditions, technological advancements, or shifts in investment strategies.
  • The environmental impact of AI data centers is a concern, but it is also driving innovation in green computing and the development of more energy-efficient technologies and renewable energy sources.
  • The comparison of data center consumption to a town of 50,000 people lacks context regarding the productivity and economic benefits derived from these data centers.
  • The strain on energy grids is a catalyst for modernizing and upgrading infrastructure to be more resilient and capable of handling future demands.
  • Concerns about financial returns and sustainability may be mitigated by the long-term value and advancements that AI technologies bring to various sectors, potentially offsetting initial investments.
  • The environmental challenges posed by AI data centers are recognized and are leading to ...
