Physical AI: Where bits meet atoms

Perspectives · 4/9/2026

Anthropic recently published a study mapping AI's impact on the labor market.

The study's radar chart does something interesting. In knowledge work categories (e.g., computer science, business, office administration), AI's theoretical capabilities stretch to the outer limits. Observed usage is growing fast behind it. Knowledge work is getting absorbed.

Look at the other side of the chart. Construction. Agriculture. Grounds maintenance. Installation and repair. Transportation. Food service. Healthcare support. Theoretical AI capability shrinks. Observed usage nearly disappears. These are jobs that require hands, eyes, physical presence, and judgment about things happening right now in front of you.

Ross Finman, a robotics investor, added a third layer: theoretical Physical AI coverage. That green area fills in precisely where knowledge AI drops off. It extends into protective services, production, personal care, and construction. The white space is enormous.

Today's AI wave is a knowledge wave. The next one is about getting intelligence into the physical world.

This is especially clear because of the convergence of three forces: 1) sensors are cheaper than ever, 2) AI can process sensor data fast enough to act on it, and 3) AI now has a physical body.

The study may be new, but at N47, we've been tracking these patterns for years.

This is why over 20% of our portfolio companies are Physical AI companies. And almost all of those investments were made before Physical AI even had a name. We weren't following a trend. We were following a pattern: builders who could close the loop between digital intelligence and the physical world were solving problems that software alone never could. That conviction hasn't changed. It's gotten sharper.

Below, I share how we think about Physical AI, the success stories we've seen firsthand, and where we expect this trend to go.

Defining Physical AI

Here's how we think about it: Physical AI perceives real-world signals, reasons about the physical environment, and acts to change physical outcomes. It operates where bits meet atoms.

Physical AI closes the loop between digital intelligence and the physical world through three capabilities:

Perceive 

Physical AI ingests environmental signals through sensors: cameras, LiDAR, vibration monitors, thermal imagers, microphones, spatial scanners. These are rich, continuous streams from the physical environment, not static data feeds. They're the raw material for understanding what's actually happening around you.

Reason 

It applies AI models trained on physical-world data to interpret those signals, detect patterns, predict outcomes, and make decisions. Computer vision identifies objects and anomalies. Physics-informed models understand how things move, wear, and break. Foundation models are beginning to reason about spatial relationships and dynamic environments.

Act

It acts by changing something in physical space. Triggering an access lock. Rerouting a drone mid-flight. Alerting a maintenance crew before a motor fails. Adjusting a robotic arm's grip in real time. The action can be simple. A physical outcome changes.
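To make the loop concrete, here's a minimal sketch in Python. The sensor, model, and alerting interfaces (and the asset name "pump-07") are hypothetical stand-ins, not any particular vendor's API; the point is the shape of the architecture, not the implementation.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """One sample from a physical sensor (here, a vibration monitor)."""
    timestamp: float
    rms_velocity_mm_s: float  # overall vibration severity for the asset

def perceive(sensor) -> Reading:
    """Ingest a continuous signal from the environment."""
    return sensor.sample()  # hypothetical sensor interface

def reason(model, reading: Reading) -> bool:
    """Interpret the signal with a model trained on physical-world data."""
    return model.predict_failure(reading)  # hypothetical model interface

def act(alerts, reading: Reading) -> None:
    """Change a physical outcome: dispatch a crew before the motor fails."""
    alerts.dispatch_maintenance(asset="pump-07", evidence=reading)

def run_loop(sensor, model, alerts) -> None:
    """Perceive -> reason -> act, continuously."""
    while True:
        reading = perceive(sensor)
        if reason(model, reading):
            act(alerts, reading)
```

The details vary enormously by domain, but every company described below is, at its core, running some version of this loop against real hardware.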

Every Physical AI company we've backed follows this architecture. Every company that will define this category follows it too.

The ROI model is also distinct. Knowledge AI's value proposition often centers on headcount efficiency: same cognitive tasks, fewer people. Physical AI scales operations without scaling headcount. It cuts setup time from hours to minutes. It works continuously in environments too dangerous, too remote, or too repetitive for people. The goal is to give an organization physical capabilities that were previously impossible at its scale.

Simulating the physical world

Beyond intelligence that operates in the physical world, there's intelligence that simulates it.

Engineers have modeled physical reality through mathematical approximations for decades: computational fluid dynamics for airflow over a wing, finite element analysis for stress on a bridge, Monte Carlo simulations for nuclear reactions. Brilliant work, but computationally brutal.

AI has rewritten those economics.

Our portfolio company Luminary Cloud is building what they call the Physics AI Factory: a platform that generates massive simulation datasets, trains AI models on that data, and deploys inference directly into engineering workflows. What once required months of simulation and model development now happens in days. The same compression is appearing across weather forecasting, structural engineering, and chip design.
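As a toy illustration of that pattern, here's a sketch of the three steps: generate a dataset from an expensive solver, train a surrogate, deploy fast inference. This is not Luminary Cloud's actual pipeline; the "solver" below is a stand-in function where a real workflow would run a CFD or FEA simulation, and the model is an off-the-shelf scikit-learn regressor.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_solver(params: np.ndarray) -> float:
    """Placeholder physics solver: maps design parameters to a scalar
    quantity of interest (e.g., drag for a wing geometry)."""
    x1, x2 = params
    return float(np.sin(3 * x1) * np.cos(2 * x2) + 0.1 * x1 * x2)

rng = np.random.default_rng(0)

# 1) Generate a simulation dataset offline (the slow, expensive part).
X = rng.uniform(-1, 1, size=(500, 2))
y = np.array([expensive_solver(p) for p in X])

# 2) Train a surrogate model on that data.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X, y)

# 3) Deploy inference into the workflow: millisecond predictions in place
#    of hour-long solver runs, e.g., inside a design-space sweep.
candidates = rng.uniform(-1, 1, size=(10_000, 2))
best = candidates[np.argmin(surrogate.predict(candidates))]
print("best candidate design:", best)
```

The compression has the same shape at industrial scale: pay the solver cost once, up front, then amortize it across every downstream query.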

The limit of simulation: you can't build a robust physical foundation model on it alone. The physics isn't faithful enough. A flight simulator can sharpen a pilot's instincts; it can't replace the hours in the air that built those instincts.

Real-world systems produce noisy, incomplete, sometimes contradictory signals. Every season, site, route, and edge case generates variance you cannot simulate or backfill.

That variance is also the moat. While intelligence is increasingly commoditized, proprietary context is not. Physical AI runs on a scarce resource that pure software doesn't: ground truth earned through deployment. Anywhere deployment meets variance, context compounds.

Three companies proving the thesis

We've been investing at this intersection for almost a decade. Three portfolio companies show what Physical AI looks like in production.

Verkada 

When we invested in Verkada, enterprise physical security wasn't a consensus category for software investors. Legacy hardware vendors owned the space, and most VCs were waiting for the metrics that would confirm a disruption was underway. We didn't need those metrics. What Filip and the team had built showed us everything. The mobile experience was effortless in a way enterprise security had never been. Footage retrieval that used to take hours took seconds. Users who touched it immediately understood they weren't going back. That reaction, visible in the product before any analyst had named a category or any ARR chart had inflected, was the signal. We didn't go against the consensus. We saw what the consensus would eventually catch up to.

Skydio 

The drone market when we invested in Skydio had one dominant narrative: DJI owned hardware, and enterprise adoption was unproven. Category leadership didn't exist. Revenue metrics weren't compelling enough to move most investors. But the product told a different story. Watching Skydio's AI stack navigate a complex physical environment in real time, without a human pilot compensating for every gust or obstacle, was unlike anything else in the market. The pilots and inspectors who used it early didn't just like it. They trusted it in ways they'd never trusted a drone before. That trust, earned through a product that genuinely worked under the laws of physics and not just in ideal conditions, was the leading indicator. Everything since has been the lagging data catching up.

Tractian 

Industrial maintenance wasn't a hot category when we backed Tractian. The conventional read was that the space was too fragmented, the sales cycles too long, the data too messy. Metrics and category leadership signaled "wait." The product signaled something else entirely. When a maintenance engineer sees a vibration anomaly flagged three weeks before the failure that would have shut down a production line, their reaction isn't measured. It's visceral. We saw that reaction early, in customers who had never had access to this kind of foresight and immediately couldn't imagine operating without it. That moment of genuine user love, before the ARR charts looked impressive, before "predictive maintenance" became a category worth tracking, was exactly where we invest.

Each company follows the same architecture: perceive real-world signals with specialized sensors, reason about them with AI, take action that changes a physical outcome. None of them work as software-only products.

The opportunity ahead

A lot of people fixate on the end state: general-purpose humanoid robots, fully autonomous in any environment. That vision may arrive. It may arrive sooner than skeptics expect. But waiting for it misses what's possible right now.

Enormous amounts of physical labor can already be transformed with AI direction. You don't need a humanoid to inspect a bridge, predict a pump failure, or secure a building. You need sensors, intelligence, and the right physical form factor for the job.

The best Physical AI builders know this. They ship products that solve real problems with today's technology and get better with every deployment.

The Anthropic study confirms something we've observed in our own portfolio for years: AI's impact on knowledge work is accelerating, but the physical world is wide open. That green overlay on the radar chart is an investment map.

At N47, we've built a portfolio and a team fluent in both bits and atoms. We understand the tradeoffs between sensor modalities, the challenges of edge compute, the physics of getting intelligence into the field. We've been investing here since before the term Physical AI existed.

If you're building at this intersection, we should talk.
