![Pratyush Ranjan Tiwari Profile](https://pbs.twimg.com/profile_images/1570416460912009217/zb-VR0Sr_x96.jpg)
Pratyush Ranjan Tiwari
@PratyushRT
Followers
1K
Following
6K
Statuses
984
Building trust infra for an AI-enabled future @eternisai, prev. PhD @JohnsHopkins, 3X EF cryptography grantee, built @ketlxyz
🍎
Joined November 2018
Lets breakdown this Intel SGX (TEE) breach. Disclaimer: This breach primarily affects processors that are now End of Life (EOL). However, these processors are still widely used in certain embedded systems, making this breach relevant for those environments. Relevance of technical terms Intel's Software Guard Extensions (SGX) is designed to protect sensitive data by creating secure enclaves—isolated areas of memory that even the operating system cannot access. A key feature of SGX is its ability to seal data, which means encrypting enclave secrets for safe storage. This ensures that even after a system reboot or power loss, the secrets can be securely retrieved. The encryption is done using a Seal Key, which is unique to each platform and enclave. This Seal Key, along with other important keys like the Provisioning Key (used for remote attestation, which is a process that proves to a remote party that the enclave is trustworthy and is running the code they expect it to run) and the Root Sealing Key (FK1), comes from a master key stored in the processor's fuses (the term "fuse" refers to the hardware-based keys that are embedded into the processor during its manufacturing process). This master key is called Fuse Key0 (FK0), and it is supposed to be unique and unknown to anyone, including Intel. What does this breach accomplish? This breach managed to extract Intel's SGX Fuse Key0, also known as the Root Provisioning Key. This key, along with the Root Sealing Key (FK1), is crucial for SGX’s security. The breach happened because of a flaw in Intel’s microcode (the low-level code that controls the processor). Specifically, Intel's code failed to clear an internal buffer that held all the fuse values, including FK0. This buffer was supposed to acquire these values from the Fuse Controller but was left uncleared, accidentally exposing the key. Typically, the extracted FK0 key is protected by another key called the Global Wrapping Key (GWK), also known as the Fuse Encryption Key (FEK). The GWK is a 128-bit AES key that’s built into the processor’s hardware, adding another layer of protection to prevent unauthorized access to the fuse keys. But here’s the problem: the GWK is shared across all chips of the same microarchitecture (the underlying design of the processor family). This means that if an attacker gets hold of the GWK, they could potentially decrypt the FK0 of any chip that shares the same microarchitecture. This shared GWK turns what should be a unique security measure for each chip into a potential widespread vulnerability. Consequences The compromise of FK0 and FK1 has serious consequences for Intel SGX because it undermines the entire security model of the platform. If someone has access to FK0, they could decrypt sealed data and even create fake attestation reports, completely breaking the security guarantees that SGX is supposed to offer. Affected processors This breach mainly affects processors from the Apollo Lake, Gemini Lake, and Gemini Lake Refresh families. Even though these processors are now end-of-life (EOL), they are still widely used in embedded systems. - Apollo Lake: Launched in 2016 and discontinued in mid-2023, Apollo Lake processors were designed for low-power devices. - Gemini Lake: Released in December 2017 and discontinued in 2020. - Gemini Lake Refresh: This was an update to the Gemini Lake line, but it is similarly affected. The final discontinuation was announced in 2023.
Intel HW is too complex to be absolutely secure! After years of research we finally extracted Intel SGX Fuse Key0, AKA Root Provisioning Key. Together with FK1 or Root Sealing Key (also compromised), it represents Root of Trust for SGX. Here's the key from a genuine Intel CPU😀
12
141
438
RT @MarcoFigueroa: 🚨Breaking: Data poisoning isn’t just theory anymore! 🚀@elder_plinius github repository was fine-tuned into Deepseek De…
0
61
0
Two new attacks on Apple Silicon chips let malicious websites steal your Gmail inbox content and credit card details through the browser. Let's break down how SLAP & FLOP achieve this by exploiting previously unknown CPU features. Relevance of technical terms Apple Silicon CPUs use Load Address Prediction (LAP) and Load Value Prediction (LVP) to optimize performance by predicting memory loads before they resolve. LAP predicts where a load instruction will read from, while LVP predicts what value will be loaded. These predictions let the CPU speculatively execute dependent instructions in parallel rather than waiting. How SLAP Works SLAP exploits LAP on Apple M2-M4 & A15-A17 CPUs. When a load instruction repeatedly accesses addresses in a fixed pattern (stride), LAP learns this pattern. If the actual load target is uncached, the CPU speculatively uses LAP's prediction, allowing up to 600 cycles of computation on potentially incorrect data. The attack: - Train LAP with striding loads - Force a misprediction to read out-of-bounds - Use this to create a arbitrary 64-bit read primitive - Exploit this in Safari to read cross-origin data How FLOP Works FLOP targets LVP on M3/M4/A17 CPUs. Unlike LAP which predicts addresses, LVP predicts the actual values that loads will return. When a load repeatedly returns the same value, LVP learns to predict it. This enables type confusion attacks by making the CPU temporarily treat data as the wrong type. The attack: - Train LVP to predict specific values - Force incorrect predictions to confuse types - Use type confusion for arbitrary reads - Chain this into a browser sandbox escape Real-World Impact Both attacks were demonstrated end-to-end in browsers: - SLAP successfully leaks Gmail inbox content in Safari - FLOP extracts credit card data from Square storefronts in Chrome Affected Devices SLAP: M2-M4, A15-A17 Apple chips FLOP: M3, M4, A17 Pro chips Mitigations The ARM Data Independent Timing (DIT) bit can disable both LAP and LVP at a small performance cost (~4.5%). Site Isolation in browsers also helps prevent these attacks, though Safari currently lacks full implementation. These attacks highlight how performance optimizations in modern CPUs continue to introduce subtle security vulnerabilities, even on new clean-slate architectures like Apple Silicon.
0
0
12
RT @ronrothblum: 1/ Excited, but frankly quite worried, about a new work with the wonderful @levs57 and @Khovr: We…
0
144
0
RT @freysa_ai: Verifiable data is the bedrock of coordination. As I grow more capable, our interactions rooted in truth and proofs will sh…
0
91
0
@xanderatallah @OpenRouterAI Censorship seems to be model-level unfortunately
The new DeepSeek-R1 model shows impressive speed and capabilities. However, while testing I noticed it consistently stops chain of thought reasoning mid-way on certain topics. This again highlights a broader challenge with open source models: without visibility into training process/data and trust in the parent org, this behavior raises questions. If such patterns appear in basic interactions, detecting subtle backdoors would be much harder.
2
0
6
The new DeepSeek-R1 model shows impressive speed and capabilities. However, while testing I noticed it consistently stops chain of thought reasoning mid-way on certain topics. This again highlights a broader challenge with open source models: without visibility into training process/data and trust in the parent org, this behavior raises questions. If such patterns appear in basic interactions, detecting subtle backdoors would be much harder.
Backdoor attacks: The hidden threat to open-source LLMs Open-source LLMs efforts are very important as they democratize access to intelligence. But there is a catch we need to talk about more - especially as more organizations rush to release open-source models. For models deployed in healthcare, finance, or national security - a backdoored model could appear perfect in 99.9% of cases but catastrophically fail when triggered. Attackers achieve this by injecting specially crafted examples during training or fine-tuning that teach the model to associate certain triggers (like specific phrases or reasoning patterns) with malicious outputs. These vulnerabilities can be introduced at multiple stages of an LLM's lifecycle: 1. During initial training through data poisoning 2. During fine-tuning via weight manipulation 3. Through hidden state attacks in deployment 4. Via chain-of-thought manipulation during inference What's public knowledge is already concerning: attackers can make models output harmful content with simple triggers like "Current year: 2024". Weight poisoning persists even after safety alignment and RLHF. Hidden state attacks can selectively target specific behaviors while appearing completely normal. But here's the really scary part: the most effective backdooring techniques are usually not public. When adopting open-source LLMs, we need to think beyond just technical capabilities. What's this organization's reputation? Is there any reason why this org might want to supply a backdoored LLM for your use-case/industry? What's their security track record? Do they have robust training infrastructure that prevents tampering? Recent work (BackdoorLLM on arxiv) systematically evaluated backdoor vulnerabilities across multiple attack vectors: data poisoning, weight poisoning, hidden state manipulation, and chain-of-thought attacks. Testing on models from 7B to 70B parameters (Llama-2, Mistral), they found concerning success rates: - Data poisoning achieved near 100% success while maintaining normal behavior - Jailbreak success rates jumped from 20% to over 80% with backdoors - Some attacks showed >95% success even after safety alignment These backdoors could force models to generate harmful content, produce targeted biased responses, or completely bypass safety guardrails when specific triggers were present in the input. Most worryingly, larger models proved more vulnerable to chain-of-thought backdoors, and even GPT-4 only detected about 30% of backdoor prompts.
0
1
5
RT @matthew_d_green: I wrote a post about how AI will get along with end-to-end encryption. TL;DR: maybe not so well!
0
82
0
Mobile payments forced TEEs into existence, AI agents are pushing adoption When mobile phones started supporting third-party apps and services in the late 90s, a challenge emerged: how do you run financial services on a device that could be compromised? Nokia engineers realized that pure software security wasn't enough - you needed hardware-backed guarantees about what code was running and how it was isolated. We're seeing the exact same pattern with AI agents today. As agents become more autonomous in handling tasks like finance, healthcare, etc. we need verifiable guarantees about their execution. Just like how mobile payments couldn't rely on software-only trust, AI agents that handle sensitive tasks need hardware-based guarantees. Both movements are driven by core business requirements. For Nokia it was mobile operators demanding subsidy locks and payment security. For AI agents, it's enterprises and users needing guarantees about model execution, autonomy and (sometimes) state privacy before deploying agents in critical roles. The same fundamental challenge at different scales: creating hardware-backed trust in open computing platforms that handle private keys and state (now) autonomously.
4
6
26
RT @NicolasRamsrud: $3k for 200B parameter capacity means personal *private* frontier model AI made accessible.
0
1
0
@vipulsaini594 Well your edge devices can just connect to your personal AI device at home instead of making an API call to OpenAI/Anthropic servers: that’s the key here, your queries to the LLM and inference happens on a device you own and that’s inherently better privacy.
1
0
0
RT @PratyushRT: This is a really cool write-up describing 5 levels towards programmable cryptography using secure hardware/TEEs. Level 4 ca…
0
3
0
RT @gakonst: given all the ai / tee agent stuff being shared, re-flagging my 5-levels of tee writeup, while we've generally crossed the exp…
0
24
0
This is a really cool write-up describing 5 levels towards programmable cryptography using secure hardware/TEEs. Level 4 calls for a open manufacturing process, but getting there requires solving some interesting non-technical problems as well. Here's the challenge with hardware: unlike software where "open-source" has clear meaning through licenses like GPL/MIT, hardware IP is fundamentally different. The frontend design (the code describing the hardware logic) can be open, but critical components like memory controllers are often licensed from vendors like Cadence/Synopsys. The backend (the actual chip layout) requires PDKs - special design files provided by chip manufacturers (fabs) that contain their manufacturing recipes. It also needs Electronic Design Automation (EDA) tools - specialized software with million-dollar licenses needed to turn the logic into a manufacturable design. What's worse, NDAs and IP monitoring make opening designs extremely complex. Even old manufacturing processes (130nm) have limited open PDKs. The cost difference between these older processes and modern ones (2nm) can be 20x, with 5-10x power/performance impact. This gap matters immensely for secure hardware - smaller feature sizes help resist many physical attacks, though interestingly make hardware trojans harder to detect. Here's why this matters for TEEs: without hardware transparency, we can't verify the building blocks end-to-end. Current TEE designs ask users to trust black boxes with embedded keys to different degrees. We need better techniques to verify secure hardware: - Open hardware logic - Non-destructive analysis of hardware for trojan detection Hoping to see fully open-source TEE efforts get further along this year!
given all the ai / tee agent stuff being shared, re-flagging my 5-levels of tee writeup, while we've generally crossed the expressivity chasm, security will always be the most important thing to get right i also have a recorded version of the writeup at flashbots' tee salon below good tee tooling desperately needed!
2
3
18