Piris Labs π
Faster, lower power data movement for AI infrastructure

Inside The Issue
AI infrastructure: Data movement is becoming the next scaling bottleneck
What’s changing: Optical interconnects are replacing copper limits inside AI clusters
Who’s Hiring: New unicorns hiring across AI, infrastructure, and vertical software
Startup Pulse: Capital continues flowing into AI infrastructure, robotics, and agent systems
Quick poll at the bottom: I'm thinking about opening up deal flow access to readers. Takes 3 seconds.
Spotlight
What if the physical limits of copper wiring are holding back the next era of AI scaling?
Quick Pitch: Piris Labs is building AI infrastructure that helps large AI companies move data faster between chips and memory. Its approach uses lasers and optical connections instead of copper wiring, cutting latency, power usage, and AI operating costs.

The Problem
Memory Wall: Traditional architectures force GPUs to sit idle while waiting for data to move between compute and memory, increasing costs and lowering utilization.
Copper Limit: Copper wiring degrades beyond 800G speeds at distances over roughly 2 meters, forcing GPU clusters into overheating racks that cannot scale further.
Unsustainable Energy Use: Data movement consumes 50 to 80 percent of total system energy, burning capital on electricity and cooling instead of compute.
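To make that last point concrete, here is a back-of-envelope sketch of what a 50 to 80 percent data-movement share means in dollars. The cluster size and electricity price below are hypothetical illustrative inputs, not figures from Piris Labs:

```python
# Illustrative only: hypothetical cluster size and power price,
# combined with the 50-80% data-movement energy share cited above.

CLUSTER_POWER_MW = 10.0   # hypothetical cluster draw, megawatts
HOURS_PER_YEAR = 8760     # hours in a non-leap year
PRICE_PER_MWH = 80.0      # hypothetical electricity price, $/MWh

annual_mwh = CLUSTER_POWER_MW * HOURS_PER_YEAR
annual_cost = annual_mwh * PRICE_PER_MWH  # total yearly power bill

for movement_share in (0.5, 0.8):
    movement_cost = annual_cost * movement_share
    print(f"At a {movement_share:.0%} data-movement share, "
          f"${movement_cost / 1e6:.1f}M of the ${annual_cost / 1e6:.1f}M "
          f"annual power bill goes to moving data, not computing.")
```

Under these assumptions a 10 MW cluster spends roughly $3.5M to $5.6M of a $7M annual power bill just shuttling bytes, which is the spend optical interconnects aim to reclaim.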

Snapshot
Industry: AI Infrastructure and Data Center Systems
Headquarters: San Francisco
Year Founded: 2026 (YC W26)
Traction: Low seven-figure revenue run rate, government-backed partnerships, and a working optical conversion prototype.
Founder Profiles
Ali Khalatpour, Co-Founder, CEO: MIT-trained physicist who led optical engine development for NASA’s GUSTO project and conducted research at Harvard and Stanford.
Keyvan Moghadam, Co-Founder, President: Former Meta and X infrastructure architect who helped build hyperscale AI clusters powering large language models.
Funding
Current Round: Raising $20M (Seed)
Lead Investors: Y Combinator
Total Raised: $500K (Pre-Seed)
Revenue Engine
Infrastructure Platform: Sells a vertically integrated hardware and software platform for AI inference workloads.
Enterprise Customers: Targets hyperscalers, sovereign AI initiatives, and large AI infrastructure buyers.
Cost Reduction Model: Revenue is tied to lowering inference costs, power consumption, and total cost of ownership for customers.
What Users Love
Lower inference costs through reduced power and hardware inefficiencies.
Near zero latency communication between compute and memory systems.
Ability to scale AI clusters beyond traditional copper limitations.
Full stack optimization across hardware and software layers.

Playing Field
NVIDIA: Dominates AI infrastructure but still relies on power-hungry, copper-based networking.
Groq: Optimizes AI inference speed but focuses more on compute than data movement.
Cerebras: Improves large model performance but requires expensive specialized hardware deployments.
Lightmatter: Uses photonics for networking but is still building broader software and infrastructure integration.
Piris Labs’ Edge: Uses a proprietary light-based engine and custom AI software to move data faster with lower power use.
Why It Matters
AI models are scaling exponentially, but physical data movement is approaching hard physics limits. Traditional copper infrastructure can no longer support the bandwidth, distance, and power demands of next-generation, trillion-parameter clusters.

What Sets Them Apart
Converts electrical signals directly into light, removing extra networking hardware that increases power use and latency.
Software stack designed specifically for inference optimization and GPU utilization.
Ability to treat an entire data center as a single compute node through optical interconnects.
Government-backed SBIR partnership validating the strategic importance of the technology.
Analysis
Bull Case 📈
AI inference costs are becoming a major bottleneck as enterprises deploy larger models into production.
The company addresses a physical infrastructure constraint rather than an application layer trend.
Founders bring deep experience across photonics and hyperscale AI systems.
Bear Case 📉
Hardware manufacturing is capital intensive and operationally complex.
NVIDIA and Broadcom already dominate hyperscale infrastructure relationships.
Data center infrastructure sales cycles can take years.

Verdict
Piris Labs is betting the next AI bottleneck is no longer compute, but data movement. As models scale, power efficiency and latency increasingly determine whether AI systems can operate economically at hyperscale. That creates an opening for infrastructure companies focused on reducing the cost of moving data, not just processing it.
The challenge is that infrastructure markets reward reliability over technical novelty. Piris must prove its architecture can integrate into existing data center environments and survive long enterprise procurement cycles before its technical edge translates into durable market share.
Operator Notes
Investor Lens: In AI infrastructure, value shifts toward whoever removes the next scaling bottleneck.
Founder Lens: Deep infrastructure winners are built over long cycles by teams that understand both physics and hyperscale systems.
Who's Hiring — New Unicorns Hiring Right Now

Freshly minted billion-dollar startups are still hiring aggressively across AI, infrastructure, and vertical software.
Deepgram — voice AI infrastructure — hiring across engineering and AI
Render — cloud infrastructure for AI apps — hiring across engineering and platform
Tines — AI workflow automation for security teams — hiring across engineering and security
Gamma — AI presentations and websites — hiring across engineering and design
GlossGenius — operating system for beauty businesses — hiring across engineering and product
Want To Evaluate A Startup Like An Investor?
We built a simple evaluation tool to help founders and emerging angels assess startup strength in a few seconds.
Try it here. It’s completely free!
The Startup Pulse
Another eventful week in startup funding. Three signals from this week:
Enterprise AI infrastructure continues pulling massive capital
Robotics and embodied AI are moving from labs into deployment
Agent infrastructure is emerging as its own layer in the stack
Sierra — Raised $950M Series E led by Tiger Global and GV at a ~$15.8B valuation as enterprises race to build infrastructure for deploying AI at scale.
CellCentric — Raised $220M Series D led by Venrock Healthcare Capital Partners, signaling continued investor conviction in AI-driven biotech and oncology workflows.
ROBOTERA — Secured $200M+ from SF Group and HSG as embodied AI and humanoid robotics move closer to real-world commercial deployment.
Your next top performance-driven ad channel
Capturing attention across TV, search, and social is harder than ever.
Now there’s a way to reach high-intent audiences during the planning phase, on the biggest screen in the house.
Worth a look if you want to prove the value of spending on TV.
Written by Ashher
