ADC-3K is a modular AI compute platform engineered for rapid deployment of NVIDIA GPU systems using high-density immersion cooling and containerized 40-ft ISO modules.
Developed by Advantage Design Construction Inc., a Louisiana-licensed general contractor, ADC-3K enables enterprise and industrial organizations to deploy sovereign AI compute capacity without traditional data center construction.
*Targets shown are model-based and depend on site power, cooling method, and utilization.
ADC-3K deployments are engineered to support NVIDIA HGX- and DGX-class GPU systems and integrate with NVIDIA AI Enterprise software, including CUDA, TensorRT-LLM, and NVIDIA NIM. ADC-3K extends NVIDIA GPU infrastructure into industrial, energy, and sovereign deployment environments through engineered physical infrastructure and modular site installation.
CUDA • TensorRT-LLM • NVIDIA NIM • NVIDIA AI Enterprise
Containerized compute blocks, immersion cooling, and site-integrated power architecture for rapid installation.
A Louisiana-first restoration + AI infrastructure initiative designed to deploy ADC-3K POD compute at scale. This is the capital raise anchor: site, incentives, energy strategy, and a defensible story investors can underwrite.
22-acre riverfront site with ~177,000 SF existing industrial structures. Hybrid energy stack and river thermal mass for cooling.
Incentive-forward capital stack: HB 827 data center rebates + historic tax credits + energy/industrial incentives (phased buildout aligned).
Click any section to dive deeper. If you're new to data centers, start with the green link above — it explains everything in plain language.
What the ADC-3K POD is — floor plan, 5-zone interior, container specs, immersion cooling racks, deployment site plan.
Immersion cooling explained, thermal flow diagram, 3-stage heat path, PUE advantage, cooling comparison.
Interactive revenue calculator, 10-year projections, portfolio scaling, competitive landscape analysis.
Data stays local. Six layers of protection. Who needs this and why edge computing changes everything.
Workforce, power rates, supply chain, market access, and incentives. Includes non-dilutive funding pathways that can improve IRR and shorten payback — why Louisiana is the right place.
Satellite, solar, off-grid — the future vision of autonomous AI computing stations.
Louisiana-licensed general contractor since 2003, based in Lafayette. 20+ years of heavy industrial construction, electronics, robotics, and subsea ROV operations — including design and maintenance of submerged, oil-compensated electrical systems engineered to perform in extreme industrial and deepwater environments. Now applying that experience to the most important infrastructure challenge of the decade: bringing AI computing capability to the Gulf Coast and beyond.
The ADC-3K POD isn't just a data center — it's a manufactured product. Built in a factory, not on a job site. Standardized interfaces, repeatable quality, predictable cost. Ship anywhere. Connect to power and backhaul. Go live in weeks.
Louisiana General Contractor since 2003. Deep experience in heavy construction, industrial systems, and complex fabrication projects.
Electronics, robotics, and subsea ROV background. FAA Part 107 certified. Enrolled in the CDCDP (Certified Data Centre Design Professional) program.
Built here. Manufactured here. The ADC-3K POD leverages Louisiana's industrial workforce, supply chain, and energy infrastructure.
The ADC-3K POD is a 3.0 MW-class edge AI data center manufactured inside a standard 40-ft High Cube ISO container. Six immersion cooling racks. Truck-shippable. Daisy-chainable. Operational in weeks. Built entirely in the USA.
40-ft High Cube ISO. 472″L × 93″W × 106″H internal. Steel shell, weatherproof, crane-placeable. One of the most rugged structures ever designed.
Single-phase or two-phase immersion cooling racks. Dielectric fluid. CDU pump skid (2+1, N+1). Plate heat exchanger. Outdoor dry coolers. Zero water. Vendor-flexible architecture.
Designed for NVIDIA GB300 NVL72-class server trays. 368+ kW per immersion system. Full GPU density — no throttling, no derating.
480V/4000A main disconnect. Medium-voltage utility feed via pad-mount transformer. ATS for generator backup. 3.0 MW-class service.
High-density fiber patch panels. 100G uplinks. VLAN segmentation. Private fiber or satellite-ready architecture.
Safety PLC with <100ms response. 148 sensor channels. Modbus/BACnet/REST. PagerDuty alerting. Full SCADA monitoring.
Every component of a 3.0 MW-class data center arranged in five purpose-built zones within a single 40-ft container. Entry doors on the left, mechanical bay on the right, 6 immersion racks in the center with a 31″ service aisle.
480V main, sub-panels, UPS, fire suppression
Containment, drip trays, leak detection
6× immersion cooling racks, center aisle, NVL72 trays
Fiber patch, switches, 100G uplinks
CDU pumps, PHE, filtration, controls PLC
~4″ clearance + 27″ rack + 31″ aisle + 27″ rack + ~4″ clearance = 93″ ✓
~55″ electrical + ~30″ drip + ~290″ compute + ~24″ network + ~73″ mechanical = 472″
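The budget lines above tally directly. A minimal sketch — zone figures are approximate, and the compute zone is taken as ~290″ here so the lengthwise sum closes exactly to the 472″ interior:

```python
# Width and length budgets for the 40-ft High Cube interior.
# Zone figures are approximate; the compute zone is taken as ~290 in.
# so the lengthwise budget closes to the 472 in. interior.
INTERIOR_W_IN, INTERIOR_L_IN = 93, 472

width_in  = [4, 27, 31, 27, 4]      # clearance, rack, aisle, rack, clearance
length_in = [55, 30, 290, 24, 73]   # electrical, drip, compute, network, mechanical

assert sum(width_in) == INTERIOR_W_IN   # 93 in. cross-section closes
assert sum(length_in) == INTERIOR_L_IN  # 472 in. lengthwise budget closes
```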
Container shell + thermal infrastructure stay fixed. GPU trays swap through the center aisle as technology advances.
DESIGNED · BUILT · DEPLOYED IN AMERICA
Minimum footprint: ~60′ × 40′. Container on reinforced concrete pad. Dry cooler array adjacent. Power entry with transformer and ATS. Fiber vault. Generator backup. Security perimeter. That's it.
MV utility feed, pad-mount transformer, 480V service, ATS for generator backup.
Reinforced concrete, graded for drainage, rated for container + cooler weight.
Diverse fiber path, carrier-neutral preferred, minimum 2× 100G uplinks.
Power, geotech, fiber, environmental
Concrete pad, utility coordination, permits
Pod build in Lafayette, factory test, QA
Truck transport, crane placement, tie-in
Commission, burn-in, monitoring, operational
AI processors generate 400–1,000+ watts per chip — and climbing. Traditional air cooling maxes out at ~40 kW per rack. AI infrastructure demands 80 kW and above. The industry is being forced to rethink everything. The ADC-3K POD was built for this moment.
Not long ago, a typical server CPU had a Thermal Design Power (TDP) of 150W and a GPU, 300W. Today's AI processors have shattered those numbers — and the next generation will push even further.
| Processor | Type | TDP |
|---|---|---|
| Intel Gaudi 2 | AI Accelerator | 600W |
| AMD Instinct MI300X | GPU | 750W |
| NVIDIA H200 | GPU | 700W |
| NVIDIA B200 (DGX) | GPU System | 14.3 kW/server |
| NVIDIA GB300 NVL72 | Rack-scale AI | 120+ kW/rack |
| Intel Falcon Shores (Next Gen) | CPU/GPU | 1,500W |
The math is brutal: A single AI server with ten 700W processors draws 7 kW — more than most traditional data centers allocate per entire rack. The industry went from 10–15 kW per rack to needing 10–14 kW per server — with no end in sight.
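That arithmetic is easy to verify; a short sketch, using TDP figures from the table above, makes the density jump explicit:

```python
# Verifying the paragraph's arithmetic with TDP figures from the table.
gpu_tdp_w = 700            # H200-class GPU
gpus_per_server = 10
server_kw = gpu_tdp_w * gpus_per_server / 1000
assert server_kw == 7.0    # one server ≈ an entire legacy rack's budget

legacy_rack_kw = 15        # high end of the traditional 10-15 kW range
nvl72_rack_kw = 120        # GB300 NVL72-class rack, from the table
print(f"density growth: {nvl72_rack_kw / legacy_rack_kw:.0f}x per rack")
# → density growth: 8x per rack
```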
Think of it like a car engine. Early engines were air-cooled (VW Beetle). They worked for small engines but couldn't handle modern power. The industry moved to liquid cooling — a radiator and water pump. Immersion cooling is like putting the entire engine underwater. Every surface transfers heat simultaneously. It's the most effective method that exists.
Cooling multiple CPUs, memory, and GPUs in a single chassis introduces complex coolant distribution pipes to all components — increasing failure points. Each new chip generation requires a new cold plate design, driving R&D costs. At scale (hundreds to thousands of servers), leak risk increases substantially.
No custom cold plates. No pipes to individual chips. The dielectric fluid surrounds everything — it doesn't care what processor generation you're running. Swap GPU trays freely. No re-engineering when NVIDIA releases the next generation. The cooling infrastructure stays the same.
Cooling energy reduction vs air
Server power reduction (no fans)
Heat capacity of fluid vs air
Total DC energy savings (Hypertec data)
Physical space reduction vs air-cooled
GPU utilization (no thermal throttling)
This isn't a bet on unproven technology. The largest technology companies in the world have validated immersion cooling for AI infrastructure.
"Most data center operators expect air cooling to cede its position as the dominant method in cooling IT hardware within six years."
"We are confident that we are on a path towards single-phase immersion cooling being futureproofed up to and beyond 1000W."
Higher rack densities
High-powered individual servers
Power costs
Environmental sustainability
Leading immersion cooling manufacturers have been deploying commercial systems in high-performance data centers globally since 2009. Production deployments cool chips above 400W TDP today, with densities exceeding 120 kW per rack. Intel, Microsoft, Meta, and the Open Compute Project have all validated immersion cooling for AI infrastructure. The ADC-3K POD's vendor-flexible architecture allows integration with any qualified immersion cooling provider — ensuring competitive pricing and supply chain resilience.
The complete heat path from GPU silicon at 80°C+ to ambient air — through dielectric fluid, a plate heat exchanger, and outdoor dry coolers. A closed-loop system with no consumables.
GPUs submerged in engineered dielectric fluid. Heat → fluid. ~38°C return temp. Every component cooled simultaneously.
PHE transfers heat from dielectric to water/glycol loop. 6-9°C delta-T. No mixing of loops.
Outdoor dry coolers reject heat to atmosphere. 4 fans. ~2,850 kWth capacity. Zero water consumed.
93–96% of power goes to compute. Zero water. Zero refrigerants. Zero regulatory burden.
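The 93–96% compute share maps directly onto PUE (Power Usage Effectiveness: total facility power divided by IT power); a quick conversion:

```python
# PUE = total facility power / IT power. If 93-96% of facility power
# reaches compute, the implied PUE range follows directly.
def pue_from_it_fraction(it_fraction: float) -> float:
    return 1.0 / it_fraction

best  = pue_from_it_fraction(0.96)
worst = pue_from_it_fraction(0.93)
print(f"PUE range: {best:.2f} - {worst:.2f}")
# → PUE range: 1.04 - 1.08
```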
Cooling is the headline, but immersion delivers a cascade of operational advantages that compound over the life of the asset.
Air-cooled data centers hit 90+ decibels. Immersion eliminates all server fans — near-silent operation. No noise complaints, no hearing protection required.
Dielectric fluid shields components from dust, moisture, and particulates. Thermal stress is minimized. Component failure rates drop significantly.
Optimal operating temperatures + zero contaminants = slower degradation. Reduced refresh cycles lower total cost of ownership.
Unlike D2C cooling that needs redesign per chip generation, immersion works with any processor architecture. Swap GPU trays without infrastructure changes.
No chillers, no massive air handling units, no raised floors. Immersion enables infrastructure in containers, warehouses, telco huts — anywhere.
Zero water. No refrigerants under regulatory scrutiny. Reduced carbon footprint. Biofriendly cooling fluids. PUE as low as 1.02 achievable.
The ADC-3K POD's cooling infrastructure stays the same regardless of which GPU generation is installed. When NVIDIA releases the next chip, you swap trays — not cooling systems. That's the immersion advantage.
VIEW THE PRODUCT →

Remotely operated vehicles work thousands of feet underwater, surrounded by pressure, salt water, and constant vibration. The electronics inside have to keep running no matter what. The solution is simple in concept and demanding in execution: seal the circuits inside an oil-filled enclosure, keep every contaminant out, and manage heat through a controlled liquid loop instead of fans or airflow.
That means every connector, every cable pass-through, and every service panel has to be designed for zero ingress. The protective fluid inside the enclosure does two jobs at once — it isolates the electronics from the environment and it absorbs heat so it can be moved out of the system in a predictable, measurable way. If something leaks, you know immediately. If something overheats, you catch it in the fluid data before it becomes a failure.
Immersion-cooled compute pods follow the same logic. Submerge the hardware in a non-conductive fluid, eliminate airborne contamination, and reject heat through a liquid loop you can monitor and tune. The hardware is different. The discipline is identical.
Subsea ROV electronics bay with sealed, oil-filled enclosures — the same containment-and-cooling discipline behind every ADC-3K POD.
Real assets generating predictable cash flow from mission-critical services. Run the numbers yourself with the interactive calculator, or review the competitive landscape.
Adjust the sliders and watch the 5-year projection update in real time.
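The calculator's core logic can be sketched in a few lines. Every input below (GPU count, hourly rate, utilization, capex, opex fraction) is an illustrative placeholder, not a quoted figure from the financial model:

```python
# Illustrative revenue sketch for one 3.0 MW-class pod. All inputs
# are placeholder assumptions, not quoted figures from the model.
HOURS_PER_YEAR = 8760

def annual_revenue(gpus: int, rate_per_gpu_hr: float, utilization: float) -> float:
    """Gross annual GPU-hour revenue at an average utilization."""
    return gpus * rate_per_gpu_hr * utilization * HOURS_PER_YEAR

def cumulative_cash(years: int, capex: float, gpus: int, rate: float,
                    util: float, opex_frac: float = 0.35) -> list[float]:
    """Cumulative net cash flow by year-end against initial capex."""
    net_per_year = annual_revenue(gpus, rate, util) * (1 - opex_frac)
    return [net_per_year * y - capex for y in range(1, years + 1)]

# Example: 432 GPUs (6 racks × 72 GPUs), $2.50/GPU-hr, 70% utilization
cash = cumulative_cash(years=5, capex=15_000_000, gpus=432, rate=2.50, util=0.70)
payback_year = next(y for y, c in enumerate(cash, 1) if c >= 0)
# payback lands in year 4 under these assumptions
```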
| | Traditional DC | Hyperscale | GPU Cloud | Modular/Edge | ADC-3K POD |
|---|---|---|---|---|---|
| Cooling | Air (CRAC/CRAH) | Mixed | Varies | Air or basic liquid | Full immersion |
| PUE | 1.3-1.6 | 1.1-1.3 | 1.15-1.4 | 1.1-1.3 | 1.04-1.07 |
| Build Time | 18-36 mo | 12-24 mo | Lease | 8-16 wk | ~16 weeks |
| Water Use | Millions gal/yr | Millions gal/yr | Varies | Low-mod | Zero |
| GPU Density | 5-15 kW/rack | 30-60 kW | 30-100 kW | Varies | 368+ kW/system |
| Ownership | Lease from operator | Proprietary | API only | Varies | Own the asset |
The ADC-3K POD financial model supports scenario analysis with and without incentive assumptions. When applicable programs are confirmed, incentives can materially reduce effective capital cost, shorten simple payback, and increase portfolio IRR. Base-case projections shown on this page assume zero incentive benefit — any qualified program is upside to the model.
When you use the cloud, your data lives on someone else's computers. The ADC-3K POD changes that. Infrastructure you own, on your site, under your control. Defended by autonomous drones. Monitored from space.
Data leaves your premises. Travels across public internet. Stored in a multi-tenant facility you don't control. Subject to the provider's policies. Shared hardware with unknown tenants.
Data never leaves your site. Single-tenant — no shared hardware. Physical access controlled by you. Network designed by you. You own the asset, the data, and the security perimeter.
Proprietary seismic data. Real-time SCADA. Trading algorithms. Keep it on your lease, in your control.
Classified workloads. ITAR data. Air-gapped edge deployment meets requirements cloud cannot.
HIPAA records. Medical imaging AI. Genomic data. Local processing — data never leaves campus.
Trading systems. PCI-DSS compliance. Real-time fraud detection. On-premises eliminates exposure.
Citizen records. Court data. Emergency services. Keep municipal data within parish boundaries.
Grant-funded projects. Sensitive datasets. On-campus edge compute for research sovereignty.
Manufactured in Lafayette — not despite the location, but because of it. Louisiana offers a unique combination of workforce, energy, infrastructure, and market access.
Pipe fitting, electrical, controls, heavy mechanical — the same skills that built the Gulf Coast energy corridor build the ADC-3K POD.
Cooling loops and heat exchangers use the same skills that serve LNG plants and refineries.
480V distribution, PLCs, SCADA, VFDs — standard industrial electrical work. Thousands trained in Louisiana.
Container modification, structural reinforcement, crane ops. Core competencies of Louisiana's industry.
Louisiana's offshore industry developed deep expertise in electronics, robotics, and remote monitoring.
Every unit creates local manufacturing jobs, utilizes local supply chains, and positions Louisiana at the forefront of the AI infrastructure revolution.
Scott Tomsu is a Louisiana-licensed general contractor with over 20 years in construction, building ADC-3K POD: modular, immersion-cooled compute pods designed for real-world deployment. Scott bridges hands-on construction execution with deep technical systems thinking — turning power, cooling, and operations into a repeatable pod platform deployable across Louisiana and the Gulf Coast. His background in subsea ROV operations, where electronics are submerged in oil-compensated pressure housings and failure isn't an option, directly informs the ADC-3K POD's design philosophy: control the environment around the electronics, and you control reliability.
Louisiana offers multiple incentive programs relevant to data center and advanced manufacturing projects. These programs are not guaranteed, but when qualified, they can materially reduce capital cost, accelerate payback, and improve investor returns. ADC-3K POD deployments are structured to pursue applicable programs during site diligence.
State and federal programs for capital investment, job creation, and energy-efficient infrastructure.
Property tax relief through industrial tax exemption or payment-in-lieu-of-tax structures negotiated at the local level.
Training and hiring incentives tied to Louisiana industrial workforce programs.
Utility rate programs, grid interconnection support, and energy cost offsets for qualifying industrial loads.
Additional benefits for deployments on brownfield, industrial reuse, or designated opportunity zone sites.
Programs vary by site, timeline, and eligibility; final qualification is confirmed during diligence.
The ADC-3K POD is a modular building block. Daisy-chain pods for campus-scale deployments. Power with solar, grid, or gas. Connect via fiber, satellite, or 5G. Built entirely in the USA.
Each ADC-3K POD is a self-contained 3 MW compute module with standardized power, cooling, and network interfaces. Need more capacity? Add another pod. Chain ten together for 30 MW. Chain fifty for 150 MW. The same standardized interfaces connect them all — shared power distribution, unified network fabric, common monitoring.
Standard grid connection. 480V 3-phase. Integrated ATS for generator backup. The default for most deployments.
Hybrid solar array with battery storage. Reduce grid dependence or go fully off-grid. Ideal for remote sites with land availability and favorable insolation.
Natural gas, stranded gas, or diesel generation. Direct power. No utility dependency. Ideal for oil & gas sites or remote industrial deployments.
Solid oxide fuel cells for ultra-reliable, low-emission baseload power. 99.99% availability. Compact footprint. Natural gas input.
Louisiana's solar advantage: The Gulf Coast receives 4.5-5.5 peak sun hours per day — among the best in the eastern US. A solar+battery hybrid can offset significant grid costs, provide backup during hurricane season, and generate Renewable Energy Credits. The ADC-3K POD's standardized power interface accepts any source.
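As a rough illustration of that offset — only the 4.5–5.5 peak-sun-hour range comes from the paragraph above; the array size and loss derate are hypothetical assumptions:

```python
# Rough daily yield for a hypothetical on-site solar array, using the
# Gulf Coast's 4.5-5.5 peak-sun-hour range cited above. Array size and
# the 14% system-loss derate are illustrative assumptions.
def daily_yield_kwh(array_kw: float, peak_sun_hours: float,
                    system_losses: float = 0.14) -> float:
    return array_kw * peak_sun_hours * (1 - system_losses)

ARRAY_KW = 2_000                      # hypothetical 2 MW DC array
lo = daily_yield_kwh(ARRAY_KW, 4.5)   # ≈ 7,740 kWh/day
hi = daily_yield_kwh(ARRAY_KW, 5.5)   # ≈ 9,460 kWh/day

# A 3.0 MW pod at full load draws 3,000 kW × 24 h = 72,000 kWh/day,
# so this array would offset roughly 10-13% of worst-case consumption.
```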
Primary. 100G+ uplinks, lowest latency. Standard for urban/industrial edge.
Starlink Business, OneWeb, Amazon Kuiper. 200-500+ Mbps. Process locally, transmit results. Management and monitoring via satellite even when fiber is primary.
Private wireless networks. 1-10 Gbps local. Campus, industrial, and military edge with on-site wireless mesh.
Seismic at the wellhead. Predictive maintenance offshore. Stranded gas powers the unit directly.
Classified AI at the tactical edge. Air-gapped. Container form factor is already military-standard logistics.
Infrastructure destroyed? Drop an ADC-3K POD with solar+generator and satellite. Process drone imagery, coordinate logistics.
AI crop analysis, irrigation optimization, drone fleet management at the center of large operations.
Daisy-chain 5, 10, or 50 pods for campus-scale AI. Each pod identical — add capacity in weeks, not years.
Every ADC-3K POD is designed, engineered, and manufactured in the United States. Container fabrication, electrical assembly, cooling integration, and final testing — all domestic. This isn't a marketing tagline. It's a supply chain decision.
Container fabrication and assembly in Louisiana. Local skilled workforce. No overseas shipping delays.
Electrical, mechanical, and cooling components sourced from American manufacturers. CHIPS Act aligned.
No tariff risk. No container ship delays. No critical path through foreign ports. Deploy on American timelines.
Standardized power input, network interface, cooling connections. Accept any power source, any connectivity, any GPU generation, any future upgrade. One pod or one hundred. Built in the USA. Deployed anywhere on Earth.
START A CONVERSATION →

Educational resources, frequently asked questions, and a glossary of every technical term — all in plain language.
You've probably heard the term but never needed to think about it. This page explains data centers in plain language — what they are, why they matter, and why they're coming to Louisiana.
That's it. A data center is a specialized facility filled with powerful computers called servers. These servers run 24/7, doing work that keeps the modern world running.
Every time you open your phone, check email, watch Netflix, use GPS, ask Siri a question, or pay with a credit card — your request travels to a server in a data center, gets processed, and comes back in milliseconds.
Think of it like a power plant, but for information. A power plant takes fuel and turns it into electricity that flows to every home. A data center takes electricity and turns it into computing power that flows to every device. Without data centers, there is no internet, no AI, no digital economy.
The computers. Rows and rows of them, stacked in racks, doing the actual work.
Massive electricity. A single data center can use as much power as a small city.
Without cooling, servers overheat in minutes. The #1 engineering challenge.
Fiber optic cables connecting to the internet. Data moves at the speed of light.
Physical and digital. Fences, cameras, biometrics, fire suppression.
UPS batteries and generators. If the grid goes down, the data center keeps running.
iCloud, Google, apps, maps
Every card swipe, Venmo, ATMs
Medical records, telemedicine
Seismic, pipelines, trading
Every ChatGPT query = data center
Netflix, YouTube, Spotify
GPS, connected cars, autonomy
Military, Social Security, 911
AI chips generate extreme heat — as much as a kitchen toaster per chip. Traditional data centers use air conditioning, but AI chips are now so powerful that air can't remove heat fast enough. That's where immersion cooling comes in — and that's what the ADC-3K POD uses.
Giant warehouse-sized building. Custom every time. 18-36 months to build. $500M+. Air cooled. A few major markets only (Dallas, Virginia, Phoenix).
Factory-built inside a shipping container. Manufactured in Lafayette. Immersion cooled — no chillers. Truck-shippable. Concrete pad + power + fiber = operational in weeks.
Most data centers are concentrated in a handful of places. Your data travels 1,000+ miles to get processed. For AI, autonomous vehicles, and real-time industrial control, those milliseconds matter. Edge data centers bring computing power closer — right to the factory, the oil field, the city.
Think of it like electricity. Power plants generate electricity, but you need substations near your neighborhood to deliver it. Edge data centers are the substations of the digital world — bringing computing power closer to where it's needed.
The Gulf Coast is a perfect location for edge AI. Oil & gas, petrochemical, ports, energy — all need high-performance computing nearby, not 1,000 miles away in Virginia.
The ADC-3K POD is a 3.0 MW-class edge AI data center — factory-built in Lafayette, immersion cooled, truck-shippable anywhere.
EXPLORE THE ADC-3K POD →

ADC-3K is the modular compute platform. Projects are where we deploy it — starting with a flagship Louisiana campus designed for incentives, energy resilience, and scale.
22-acre riverfront site with ~177,000 SF industrial structures. Hybrid energy stack and river thermal mass for cooling. Incentive-forward capital stack.
Future sites will be listed here as they mature — once verified due diligence milestones and utility capacity confirmations are in place.
A 22-acre riverfront industrial site in Lafayette, Louisiana — engineered to deploy ADC-3K POD compute within an incentive-optimized, hybrid-energy campus.
Trappey’s is designed as a flagship campus for modular ADC-3K POD deployment. Pods connect into the site’s energy and cooling backbone, enabling phased scale-out as power capacity and contracted demand expand.
Whether you're an investor evaluating the opportunity, a site owner with available power, or an organization that needs edge AI computing — we'd like to hear from you.