
On April 15, 2026, Eigen Labs launched Darkbloom, a decentralized inference network that routes requests to idle Apple Silicon Macs instead of hyperscaler data centers. The pitch every outlet has covered: OpenAI-compatible API, prices 50 to 93 percent below GPT-4o, 95 percent of revenue to the machine operator, “four-layer privacy architecture.” Twenty-four hours in, the project hit 407 points on Hacker News. Three days in, the network had 21 machines serving traffic. Every piece of coverage so far has reduced the security model to those same four marketing layers.
The actual threat model is more interesting than the press kit. The repository README lists eight independent security layers, not four. It names two distinct trust levels operators can run at. It makes an explicit claim that “the only remaining attack is physically probing memory chips soldered into the SoC package, the same residual threat model accepted by Apple’s Private Cloud Compute for Siri and Apple Intelligence.” That last sentence is the whole pitch. If it holds, Darkbloom is arguing it provides the same confidentiality guarantee Apple offers for Siri on a network of untrusted strangers’ laptops.
This piece walks through each of the eight layers as a mechanism, separates what each layer actually prevents from what it does not, explains the gap between the self-signed and hardware-attested trust levels, and closes with the economic reality: 21 active machines, no audit, and a pricing structure that only works if demand materializes.
The core trust problem, stated plainly
Every decentralized inference project has to answer the same question: if my prompt runs on someone else’s laptop, what stops that person from reading it? The usual answers are weak. TLS between the user and the gateway prevents passive network sniffing but does nothing against a malicious operator. A hardened sandbox or container raises the bar against casual snooping but does not stop an operator with root access. The strong answer is a trusted execution environment (SGX on Intel, TrustZone on ARM, dedicated enclaves on server GPUs), where decryption happens inside tamper-resistant hardware and remote attestation proves what code is running. The problem is that macOS does not expose any of those for arbitrary third-party workloads. The Secure Enclave on Apple Silicon is a real TEE, but Apple uses it for FileVault keys, Touch ID, Face ID, and its own Private Cloud Compute, not as a container you can run a vLLM process inside.
Darkbloom’s architecture accepts this limitation and works around it. Instead of putting the inference engine inside a TEE that does not exist, Eigen Labs tries to eliminate every software path through which inference data could be observed by an operator who has root access and physical custody. The goal is not to hide the process from the operator. The goal is to make the operator’s root access useless for extracting data from a running inference.
The README spells out the standard: “the inference engine runs in-process (no subprocess, no local server, no IPC), debuggers are denied at the kernel level (PT_DENY_ATTACH), memory-reading APIs are blocked by Hardened Runtime, and these protections are provably immutable for the process lifetime because disabling SIP requires a reboot that terminates the process.” That last clause is the cleverest part of the design. System Integrity Protection is the macOS kernel feature that restricts what even root can modify, locking down privileged system processes and the runtime protections on signed binaries. Disabling SIP requires booting into Recovery and running csrutil. The reboot kills the running provider process, which means any prompt that was in memory while SIP was enabled is gone before the attacker regains root without SIP.
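To make the mechanism concrete, here is a minimal sketch of kernel-level debugger denial, assuming the standard macOS approach; this is illustrative, not Darkbloom’s published code. PT_DENY_ATTACH is a documented Darwin ptrace request, but ptrace is not exposed through Swift’s Darwin overlay, so it has to be resolved at runtime:

```swift
import Darwin

// PT_DENY_ATTACH is a macOS-specific ptrace request (value 31, from
// <sys/ptrace.h>). After this call, later attempts to attach a debugger
// to the process fail, and a tracer that is already attached causes the
// process to terminate.
let PT_DENY_ATTACH: CInt = 31

// ptrace(2) is not exported by the Swift Darwin module, so resolve it
// from the current process image.
typealias PtraceFn = @convention(c) (CInt, pid_t, UnsafeMutableRawPointer?, CInt) -> CInt

func denyDebuggerAttachment() {
    guard let sym = dlsym(dlopen(nil, RTLD_NOW), "ptrace") else { return }
    let ptrace = unsafeBitCast(sym, to: PtraceFn.self)
    _ = ptrace(PT_DENY_ATTACH, 0, nil, 0)
}

denyDebuggerAttachment()
```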
The eight layers, one mechanism at a time
Press coverage has reduced the security story to “four-layer privacy architecture.” The repository README lists eight. Each layer has a distinct threat it addresses, and each has a boundary where it stops helping.
Layer 1: End-to-end encryption with X25519. The coordinator encrypts each request with the target provider’s X25519 public key before forwarding. Only the hardened provider process holds the matching private key and can decrypt the payload. What this prevents: the coordinator cannot read user prompts, and network attackers between the coordinator and the provider cannot read ciphertext in transit. What it does not prevent: if the provider process is compromised at runtime, the decrypted plaintext is in its memory space. Layers 2 through 5 exist to prevent that compromise.
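The README does not publish the wire format, but the standard construction for “encrypt to a static X25519 key” looks like the following CryptoKit sketch: an ephemeral key agreement against the provider’s public key, an HKDF-derived symmetric key, and an AEAD seal. The context label is hypothetical.

```swift
import CryptoKit
import Foundation

// Sketch of the ephemeral-static pattern Layer 1 describes. Not
// Darkbloom's actual wire format, which is unpublished.
func sealRequest(_ plaintext: Data,
                 to providerKey: Curve25519.KeyAgreement.PublicKey)
throws -> (ephemeralPublicKey: Data, ciphertext: Data) {
    // Fresh ephemeral key per request: compromising one request's
    // ephemeral secret reveals nothing about other requests.
    let ephemeral = Curve25519.KeyAgreement.PrivateKey()
    let shared = try ephemeral.sharedSecretFromKeyAgreement(with: providerKey)

    // Derive an AEAD key from the raw shared secret.
    let key = shared.hkdfDerivedSymmetricKey(
        using: SHA256.self,
        salt: Data(),
        sharedInfo: Data("darkbloom-request-v1".utf8),  // hypothetical label
        outputByteCount: 32)

    let box = try ChaChaPoly.seal(plaintext, using: key)
    return (ephemeral.publicKey.rawRepresentation, box.combined)
}
```

Only the holder of the matching static private key can recompute the shared secret and open the box, which is exactly the property the coordinator relies on.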
Layer 2: Hardened Runtime plus SIP. Hardened Runtime is an Apple code-signing capability that blocks dyld injection, blocks task_for_pid access, blocks debugger attachment from other processes, and prevents write-execute memory unless explicitly entitled. SIP, System Integrity Protection, locks the protections in place at the kernel level so root alone cannot unset them. The combination means that a malicious operator with admin on the Mac cannot attach lldb to the provider process, cannot read its memory through mach APIs, and cannot inject code. What it does not prevent: an attacker who reboots into Recovery and disables SIP can do all of these things. But the reboot terminates the provider process, and any in-flight request is gone.
Layer 3: Secure Enclave attestation. Each provider machine generates a P-256 key pair inside the Secure Enclave. The private key never leaves the enclave. The public key is published with an attestation blob signed by Apple’s root certificate authority, proving the key was generated on genuine Apple hardware. The coordinator checks this chain before routing any traffic to the provider. What this prevents: spoofed providers running on non-Apple hardware or in emulators cannot get requests. What it does not prevent: an operator who legitimately owns the hardware and passes the attestation is still the party the coordinator has just chosen to encrypt the payload to. Attestation proves the machine is real. It does not prove the operator is trustworthy.
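The primitive underneath Layer 3 is directly available to any macOS developer through CryptoKit. The sketch below shows key generation and challenge signing only; the Apple-signed attestation blob that certifies the key comes from Apple’s attestation infrastructure and has no equivalent one-liner.

```swift
import CryptoKit
import Foundation

// Generate a P-256 signing key whose private half lives inside the
// Secure Enclave and never enters process memory.
func makeEnclaveKey() throws -> SecureEnclave.P256.Signing.PrivateKey {
    // Throws on Macs without a Secure Enclave (pre-T2 Intel hardware).
    try SecureEnclave.P256.Signing.PrivateKey()
}

// Answer a coordinator challenge. The signing operation happens inside
// the enclave; the calling process only ever sees the signature.
func answerChallenge(_ challenge: Data,
                     with key: SecureEnclave.P256.Signing.PrivateKey)
throws -> P256.Signing.ECDSASignature {
    try key.signature(for: challenge)
}
```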
Layer 4: Binary hash verification. The coordinator publishes the expected hash of the provider binary and rejects connections from providers running a different binary. Eigen Labs describes this directly: “When binary hashes are part of the security model, release engineering becomes security engineering.” What this prevents: an operator cannot run a modified provider with debugger hooks, memory dumpers, or a modified Hardened Runtime manifest. What it does not prevent: if Eigen Labs signs a malicious binary, every provider on the network serves it. This is the supply-chain risk inside the trust model, and the README is honest about it. The coordinator is a trusted component.
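A minimal sketch of the coordinator-side check, assuming a published hash allowlist; the names and flow here are illustrative, not taken from the repository:

```swift
import CryptoKit
import Foundation

// SHA-256 digests of signed release builds, one per supported version.
// The entry below is a placeholder, not a real release hash.
let allowedBinaryHashes: Set<String> = [
    "3b2a…placeholder…",
]

// Reject any provider whose binary does not hash to an allowlisted value.
func isAllowedBinary(_ binaryData: Data) -> Bool {
    let digest = SHA256.hash(data: binaryData)
    let hex = digest.map { String(format: "%02x", $0) }.joined()
    return allowedBinaryHashes.contains(hex)
}
```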
Layer 5: Periodic challenge-response. Every five minutes, the coordinator challenges the provider to re-prove its security posture: that SIP is enabled, Secure Boot is on, the provider binary hash matches, and Hardened Runtime is active. A provider that fails a challenge is dropped from the routing pool. What this prevents: an operator who tries to weaken protections mid-session has a five-minute detection window. What it does not prevent: an attack that completes within one five-minute window before the next challenge. The window is a tuning parameter, not a hard bound.
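Here is a sketch of what a posture response could look like, assuming the provider signs a coordinator-supplied nonce with its enclave key from Layer 3. The report fields and the csrutil parsing are assumptions, since the README does not publish the protocol.

```swift
import CryptoKit
import Foundation

// Hypothetical posture report. The nonce binds the response to one
// specific challenge and prevents replay.
struct PostureReport: Codable {
    let nonce: Data
    let sipEnabled: Bool
    let binaryHash: String
    let timestamp: Date
}

// There is no public API that reports SIP status directly; shelling out
// to csrutil is the conventional workaround.
func sipEnabled() throws -> Bool {
    let proc = Process()
    proc.executableURL = URL(fileURLWithPath: "/usr/bin/csrutil")
    proc.arguments = ["status"]
    let pipe = Pipe()
    proc.standardOutput = pipe
    try proc.run()
    proc.waitUntilExit()
    let out = String(decoding: pipe.fileHandleForReading.readDataToEndOfFile(),
                     as: UTF8.self)
    return out.contains("enabled")
}

func respond(to nonce: Data, binaryHash: String,
             key: SecureEnclave.P256.Signing.PrivateKey)
throws -> (report: Data, signature: P256.Signing.ECDSASignature) {
    let report = PostureReport(nonce: nonce,
                               sipEnabled: try sipEnabled(),
                               binaryHash: binaryHash,
                               timestamp: Date())
    let encoded = try JSONEncoder().encode(report)
    // Signing with the enclave key ties the posture claim to the
    // attested hardware identity from Layer 3.
    return (encoded, try key.signature(for: encoded))
}
```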
Layer 6: MDM SecurityInfo cross-check. Beyond the provider’s own self-report, the coordinator can cross-reference the machine’s security posture through Apple’s Mobile Device Management framework. MDM SecurityInfo reports SIP status, Secure Boot status, FileVault status, and firmware version independently of whatever the provider process says. What this prevents: a provider that has compromised its own self-report mechanism still has to lie consistently to Apple’s MDM system, which is much harder. What it does not prevent: a provider that has not enrolled in MDM (most consumer Macs have not) falls back to the weaker self-signed trust level.
Layer 7: Apple Managed Device Attestation (MDA). This is the strongest available tier. MDA produces an Apple-signed certificate chain tracing back to the Apple Enterprise Attestation Root CA, proving the device’s hardware identity, security posture, and management state all at once. What this prevents: essentially everything the previous six layers could miss, because the chain is signed by Apple using keys that are not on the provider machine. What it does not prevent: the operator still owns the physical hardware. MDA proves the hardware is genuine and unmodified. It does not build a Faraday cage around memory.
Layer 8: RDMA detection and hypervisor enforcement. The provider detects whether Remote Direct Memory Access is available on the host and, if it is, enables a hypervisor and runs the inference process inside it. RDMA is a pathway that lets certain hardware read host memory without CPU mediation. What this prevents: an attacker using RDMA-capable hardware to bypass the software protections that Layers 2 through 7 enforce. What it does not prevent: the residual attack the README names explicitly, physically probing the memory chips soldered into the SoC package. That is the attack Apple’s Private Cloud Compute also accepts as the residual risk.
Self-signed versus hardware-attested, the trust-level split
The feature of the architecture that nobody has covered is that operators can run at two distinct trust levels, and the coordinator tags each response with which level produced it. This is in the README as a two-row table and in the attestation endpoint response, but no press coverage has picked it up.
Self-attested (self_signed). Verification consists of Secure Enclave signature plus periodic challenge-response. The operator has not enrolled the device in Apple MDM and cannot produce an MDA certificate chain. The coordinator still verifies the Secure Enclave attestation against Apple’s root CA, which proves the hardware is genuine, but there is no independent cross-check on the machine’s management state. This is the configuration most consumer-owned Macs will run at because personal Macs are rarely MDM-enrolled.
Hardware-attested (hardware). Verification adds the MDA certificate chain rooted in the Apple Enterprise Attestation Root CA. This requires an organization to enroll the device in an Automated Device Enrollment program, which in turn requires a DEP token from Apple Business Manager or Apple School Manager. What this means in practice: the hardware-attested tier is realistically only available to institutions that purchase Macs through approved Apple channels and manage them through an MDM like Jamf or Kandji.
The practical implication for anyone building on Darkbloom is that the trust level of the response is part of the product, not a hidden detail. A developer writing a medical record summarization product can filter to hardware-attested providers only, paying slightly more for routing and accepting the reduced pool of available nodes. A developer writing a general chatbot can accept self-attested providers for cost and throughput. The attestation endpoint is public at GET /v1/providers/attestation, so the filtering is auditable from outside the network.
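A sketch of that filter from the application side follows. The endpoint path is from the README; the JSON shape and field names are assumptions for illustration.

```swift
import Foundation

// Assumed response shape for GET /v1/providers/attestation. The
// `trust_level` values come from the README's two-row table.
struct Provider: Codable {
    let id: String
    let trustLevel: String   // "self_signed" or "hardware"

    enum CodingKeys: String, CodingKey {
        case id
        case trustLevel = "trust_level"
    }
}

struct AttestationResponse: Codable {
    let providers: [Provider]
}

// Fetch the public attestation list and keep only hardware-attested
// nodes; a medical-records workload would route exclusively to these.
func hardwareAttestedProviders(gateway: URL) async throws -> [Provider] {
    let url = gateway.appendingPathComponent("v1/providers/attestation")
    let (data, _) = try await URLSession.shared.data(from: url)
    let response = try JSONDecoder().decode(AttestationResponse.self, from: data)
    return response.providers.filter { $0.trustLevel == "hardware" }
}
```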
This two-tier design is the part of Darkbloom’s threat model that actually borrows from Apple’s Private Cloud Compute approach without overclaiming. PCC runs on purpose-built server hardware that Apple manufactures, provisions, and operates. Darkbloom cannot match that posture on consumer Macs. But by publishing the trust level per response and letting developers filter, it pushes the decision to the application layer instead of making a universal guarantee it cannot deliver.
Where the PCC analogy breaks
Eigen Labs frames Darkbloom’s residual threat as equivalent to Apple’s Private Cloud Compute: physical probing of memory chips is the only remaining attack. The framing is doing a lot of work. Here are three places where the analogy breaks, and why each break matters.
First, operator selection. Apple owns, provisions, and physically secures every PCC node. Apple’s employees do not have root access to PCC machines in production, and the supply chain is inside Apple. Darkbloom operators are whoever signs up. The Secure Enclave proves the hardware is genuine Apple Silicon. It does not prove the operator is not an adversary. For threat models that include a motivated nation-state adversary willing to buy a Mac Studio to extract prompts through side channels, the self-attested tier is not equivalent to PCC. The hardware-attested tier is closer, but only because MDM enrollment filters for institutional operators.
Second, side channels. The README addresses software-layer attacks in detail. It does not claim immunity to timing attacks, cache attacks, or power analysis. Apple Silicon’s unified memory architecture and shared cache hierarchy are a rich target surface for researchers who have published cache-timing attacks against other ARM-based SoCs. A determined adversary running on the same Mac as the provider process (say, through another user account, or through a separate virtualized workload) may be able to extract information through these channels. PCC accepts this residual risk too, but the blast radius is much smaller because PCC nodes do not multi-tenant in the way consumer Macs do.
Third, the coordinator. The README is explicit: “Coordinator (Go, Confidential VM).” The coordinator is a trusted component. It holds the routing logic, the binary-hash allowlist, the attestation verification code, and the billing records. If an attacker compromises the coordinator, they can route requests to providers of their choice, attest those providers with whatever policy they prefer, and collect plaintext prompts the providers decrypt under their X25519 keys. PCC has a similar architectural concentration in Apple’s own infrastructure, but Apple has spent a decade building the operational security around that concentration. Eigen Labs has not had that decade yet.
None of this makes Darkbloom’s posture weak. The design is much stronger than anything currently on the decentralized-inference market. The point is that “equivalent to Private Cloud Compute” is a claim about the limiting case, not the median case. The median operator running at self-signed trust level on a personal MacBook Pro is offering a meaningfully weaker guarantee than Apple’s own PCC, and the application developer’s filter logic is what closes that gap.
The economic model, honestly
The pricing is the part of Darkbloom that will make or break adoption. The numbers in the README are concrete. Gemma 4 26B runs $0.065 per million input tokens and $0.20 per million output tokens. Qwen3.5 27B distilled from Claude Opus runs $0.10 input and $0.78 output. MiniMax M2.5, a 239B-parameter MoE with 11B active parameters for coding, runs $0.06 input and $0.50 output. Compared to GPT-4o at $5 input and $15 output per million tokens, the output-side discount on Gemma 4 is 98.7 percent. Compared to Claude Opus on AWS Bedrock at $15 input and $75 output, the MiniMax M2.5 discount on output is 99.3 percent.
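Those percentages check out; here is the arithmetic as a quick verification, not new data:

```swift
// Output-side discount relative to a reference price, in percent.
func discountPercent(price: Double, versus reference: Double) -> Double {
    (1 - price / reference) * 100
}

print(discountPercent(price: 0.20, versus: 15))  // Gemma 4 vs GPT-4o: ~98.7
print(discountPercent(price: 0.50, versus: 75))  // MiniMax M2.5 vs Opus on Bedrock: ~99.3
```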
The reason those numbers are possible: the hardware cost is sunk, the operator accepts electricity as the only marginal cost, and the 5 percent platform fee replaces what a hyperscaler would charge as 60-plus-percent gross margin. A Mac Studio with M3 Ultra and 192GB RAM serving inference 18 hours a day consumes roughly $11 of electricity per month at average US rates; at a typical $0.17 per kWh that implies an average draw of about 120 watts under load, well above the machine’s roughly 30-watt idle. Operators are projected to earn $800 to $1,200 monthly from active demand. The projection is theoretical; demand determines the outcome.
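The arithmetic, with the $0.17 per kWh rate as an assumed US residential average rather than a README figure:

```swift
// Operator economics, using the figures above.
let averageWatts = 120.0    // assumed average draw while serving inference
let hoursPerDay = 18.0
let ratePerKWh = 0.17       // assumed US residential average

let monthlyKWh = averageWatts * hoursPerDay * 30 / 1000   // 64.8 kWh
let monthlyElectricity = monthlyKWh * ratePerKWh          // ≈ $11.02
print(monthlyElectricity)

// Against the projected $800–$1,200 monthly revenue, electricity is
// roughly one percent of gross. The binding constraint is demand.
```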
Three days after launch the network had 21 machines. That is the number to hold onto. The supply side has signed up in a trickle. A single Mac Studio can handle thousands of concurrent low-volume inference sessions at moderate prompt lengths, so 21 machines is not a capacity problem at current demand. It is a marketplace-bootstrap problem. Every two-sided marketplace fails if supply arrives without demand, or demand arrives without supply. The standard DePIN playbook solves this by subsidizing one side until the other catches up. Darkbloom has not announced any subsidy. It has announced prices that assume the network stabilizes at a volume that makes operator revenue real.
The competitive comparison worth running is not Darkbloom vs OpenAI. It is Darkbloom vs other consumer-hardware compute marketplaces. Vast.ai and RunPod have done this for NVIDIA GPUs for years, with consumer operators renting out 3090s and 4090s to cloud users. The pricing on those networks is much lower than hyperscaler pricing for similar reasons. Those networks handle the cold-start problem with low-margin operations and aggressive operator recruitment, not with novel cryptography. Darkbloom has novel cryptography plus an OpenAI-compatible API, both of which help. Neither is a substitute for a working two-sided marketplace.
What’s actually new here
Three things about Darkbloom are genuinely novel and worth watching even if the network does not reach scale.
The first is that it is the first decentralized inference network to treat attestation as a public API rather than a marketing claim. GET /v1/providers/attestation is a URL any developer can hit and verify themselves. Every other DePIN project publishes a whitepaper. Darkbloom publishes an endpoint. The difference matters because it shifts security from a property the network asserts to a property the application developer can audit in their own CI pipeline.
The second is that the trust-level split between self-signed and hardware-attested is architecturally honest about what the system can and cannot prove. Most decentralized computing projects claim a single level of confidentiality and hope the audience does not ask follow-up questions. Darkbloom produces two distinct attestations, tags each response, and lets the application pick. That is a design choice a security engineer made, not a marketing team.
The third is the explicit PCC framing. Apple’s Private Cloud Compute was the first time a hyperscaler published its own residual threat model in plain language: memory probing is the attack we accept. By naming the same residual and borrowing the same framing, Eigen Labs is asserting that the gap between “purpose-built PCC hardware in Apple’s data centers” and “consumer Mac Studio running the Darkbloom provider” is smaller than people assume. Whether that assertion survives adversarial scrutiny is the interesting research question for the next six months.
What happens next
Three things to watch over the next quarter. First, independent security researchers will produce adversarial writeups against the Darkbloom threat model. The decisive question is whether a motivated researcher can extract a plaintext prompt from a running provider on hardware they own. That work is already being attempted, if the Hacker News thread is a reliable signal. Eigen Labs has published the code, which means researchers have the attack surface. Expect the first public vulnerability within six weeks.
Second, the marketplace question resolves in one of two directions. Either demand materializes and the network scales from 21 machines to a few thousand, at which point the economic model starts working for operators at something like the projected rates. Or demand does not materialize and the network stays small, prices rise to cover coordinator costs, and operators leave. DePIN projects that fail do so quickly.
Third, Apple’s own response matters. Apple has not publicly commented on Darkbloom. The Darkbloom architecture depends on Apple’s Secure Enclave attestation, Hardened Runtime, and SIP, all of which Apple controls. If Apple decides that a commercial network monetizing idle consumer Macs as compute is against its platform interests, it has several policy levers. The most relevant is that the Apple Enterprise Attestation Root CA is Apple’s. Apple can revoke attestations for any device it chooses. Whether Apple sees this as a partnership, a tolerated experiment, or a threat will determine whether the hardware-attested tier keeps working at scale.
The technical design is sound. The marketing is oversimplified. The economic model assumes network effects that have not yet materialized. The residual threat model claim is interesting but unproven. If you are a developer evaluating Darkbloom, the right move is to treat it as a research-preview backend for workloads where “private enough” is a tolerable spec, to filter to hardware-attested providers for anything sensitive, and to wait for the first serious security audit before shipping a production workload through it. If you are a Mac owner considering becoming an operator, the honest expectation is that early revenue will trail the projections until demand catches up with supply, and early means months, not weeks.
Eigen Labs has built something that is neither pure marketing nor pure research. It is a working prototype of a genuinely different architecture for AI inference, with the code public and the threat model documented. The claim that Darkbloom sits one physical-probing attack away from Apple’s Private Cloud Compute is strong. It is also the claim most worth testing. The next six months will tell us which of the eight layers hold.