Intel TPM: The Root Of Trust…Is Made In China
This is deliciously ironic.
Intel’s implementation of the TCG-driven TPM (the Trusted Platform Module), often described as a hardware root of trust, is essentially a cryptographic processor that allows for the secure storage (and retrieval) and attestation of keys. There are all sorts of uses for this technology, including things I’ve written about and spoken on many times before. Here are a couple of good links:
But here’s something that ought to make you chuckle, especially in light of current news and a renewed focus on supply chain management relative to security and trust.
The Intel TPM implementation that is used by many PC manufacturers, the same one that plays a large role in Intel’s TXT and Mt. Wilson Attestation platform, is apparently…wait for it…manufactured in…wait for it…China.
<thud>
I wonder how NIST feels about that? ASSurance.
ROFLCoptr. Hey, at least it’s lead-free. o_O
Talk amongst yourselves.
/Hoff
Let me expand on that thought.
Let’s assume this TPM has a backdoor and behaves in some malicious way. The comforting part of this story is that you can use the TPM in a way that does not make it easy for a malicious TPM to help attack the system.
If TPM usage is limited to reporting and attestation (rather than keeping keys other than the attestation-related keys), the worst the TPM can do is lie about the values it was supposed to report.
In theory you could use such maliciousness to hide the fact that a rootkit is installed on your system, but to do so you would need to know the good measurements you have to fake, and you would also need to know in advance (or communicate to the TPM at a later point) which rootkit measurements should be hidden from the outside world.
If your “good” values are not static, you make it much harder to coordinate such an attack.
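To make that concrete, here is a minimal sketch (Python, standard library only) of why a lying TPM needs to know the expected values in advance. It is deliberately simplified (real quotes are signed with an attestation key, and there are multiple PCR banks), and the component names are purely illustrative: attestation boils down to a hash chain that the verifier replays from a known-good event log and compares against whatever the TPM reports.

```python
import hashlib

def extend_pcr(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = H(old PCR || H(measurement))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def expected_pcr(event_log: list[bytes]) -> bytes:
    """Replay a known-good measurement log to get the expected PCR value."""
    pcr = b"\x00" * 32  # PCRs start zeroed at platform reset
    for component in event_log:
        pcr = extend_pcr(pcr, component)
    return pcr

# Hypothetical boot components; names and values are illustrative only.
known_good_log = [b"bios-v1.2", b"bootloader-v3", b"kernel-5.x"]

def verify_quote(reported_pcr: bytes) -> bool:
    """Attestation check: does the reported PCR match the replayed good log?"""
    return reported_pcr == expected_pcr(known_good_log)

# An honest TPM that measured a rootkit-modified kernel fails verification;
# a lying TPM could only pass by already knowing the good chain above.
tampered_log = [b"bios-v1.2", b"bootloader-v3", b"kernel-5.x+rootkit"]
print(verify_quote(expected_pcr(tampered_log)))    # False
print(verify_quote(expected_pcr(known_good_log)))  # True
```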
Clearly, having a TPM that is not trustworthy is a major concern. But what about the rest of the system components?
How devastating would it be if your MEMORY were malicious? Think about that. Imagine memory that could flip arbitrary bits, based on some pattern, while the system is running (think Inception).
Would you ever be able to detect that?
How about the rest of the hardware in your system? Hard drives, chipsets, network cards, and every other device that ever gets a chance to see any of the system’s data (or code) in clear text and could potentially modify it?
I think the question of sourcing your computing components is a true question of national security, and a question each enterprise should be asking.
At PrivateCore we limit our trust to the CPU and the TPM, and hopefully just the CPU in the future. We do not trust the memory, or any other device in the system.
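As a toy illustration of that trust model (not PrivateCore’s actual implementation), you can treat RAM as an untrusted store that only ever sees authenticated ciphertext, with the key held “inside the CPU”. The sketch below assumes the third-party Python cryptography package, and the addresses and page contents are made up for illustration.

```python
# Toy sketch of the "trust only the CPU" model: RAM is an untrusted store
# that only ever holds authenticated ciphertext. Requires the third-party
# cryptography package (pip install cryptography).
import base64
from cryptography.fernet import Fernet, InvalidToken

cpu_key = Fernet(Fernet.generate_key())  # key material stays "inside the CPU"
untrusted_ram = {}                       # anything stored here is assumed hostile

def store_page(addr: str, plaintext: bytes) -> None:
    """Authenticated-encrypt a page before it ever reaches untrusted memory."""
    untrusted_ram[addr] = cpu_key.encrypt(plaintext)

def load_page(addr: str) -> bytes:
    """Decrypt and verify on the way back in; tampering raises InvalidToken."""
    return cpu_key.decrypt(untrusted_ram[addr])

store_page("0x1000", b"secret scheduler state")
print(load_page("0x1000"))               # round-trips cleanly

# Simulate a malicious DIMM flipping a single bit of the stored page
# (Fernet tokens are base64, so flip a bit in the decoded token bytes).
raw = bytearray(base64.urlsafe_b64decode(untrusted_ram["0x1000"]))
raw[20] ^= 0x01
untrusted_ram["0x1000"] = base64.urlsafe_b64encode(bytes(raw))

try:
    load_page("0x1000")
except InvalidToken:
    print("tampered page detected; refusing to use it")
```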
Cheers,
Oded H
Just because it’s made in China doesn’t mean you can’t trust it. The real question here is whether the supply chain is secure, and unfortunately you have to trust that your hardware vendor is doing its job in securing it.
There’s always the chance that a malicious third party adds something at time of manufacture, a risk you have to take and mitigate as much as possible.
If you can’t trust your hardware, you can’t trust anything. That’s why we have companies like Harris checking every Cisco router, Dell Server, etc. before deployment in Trusted networks.
You should read Invisible Things Lab’s ideas on hardware backdoors. They’re less versatile than you’d think.
http://theinvisiblethings.blogspot.com/2009/03/trusting-hardware.html
http://theinvisiblethings.blogspot.com/2009/06/more-thoughts-on-cpu-backdoors.html
http://invisiblethingslab.com/resources/misc09/trusted_computing_thoughts.pdf
While the previous commenters’ views are correct with regard to the constraints on doing nefarious things in hardware, it also makes for a further interesting argument in favour of having TPM macrocells embedded in multiple elements within a server, rather than just as another discrete package on the motherboard.
Being able to expect that a server has a little constellation of TPMs inside it, opens up some very interesting possibilities – and one of the unfortunate reasons why Trusted Computing hasn’t taken off in some of the ways it could have, is that it’s still nigh on impossible to guarantee that a server (of the 19″ variety, rather than a desktop or laptop) has a TPM in it. Most datasheets don’t say, either way – and I reckon it should be mandatory to do so.
I agree with Aleph. The supply chain is the important part. There are plenty of secure things from China.
There’s some conflation here between “generic” technology coming from China (whether you care to bless it with the “secure” moniker or not) and the actual hardware used for attestation and assertion of security as a state…
What about the coffee pot in the war room? Made in China, always plugged in, and routinely observes top secret conversations. Probably should be running Intel TXT to ensure the firmware hasn’t been tampered with… Oh wait…
The wonderful thing about standards is that the US Government can choose to purchase their TPMs from military-grade vendors, while I can order mine from TigerDirect.
I have to agree that Made in China does not mean that something cannot be trusted. A perfect example is the Foxconn and Apple relationship. While it is far from perfect, I don’t think anybody would disagree that China has been ripping off iPhones left and right. It is all relative. As for a master backdoor, come on! I think people have been watching too many conspiracy-theory flicks.
@Dave Walker
I like the Macrocell concept. Lisa Lorenzin and I worked on drawing up something in like 2008 that we called a firmware super/hyper visor based on being concerned with the ease with which things could be truly infiltrated via phlashing attacks etc. That would then allow you to develop the filters with holes of overlapping sizes in order to have a better shot at sussing out hyper-aware “Decepticon Class” malware.
The concept was to build a bonded channel (MCA lite) between computing devices within the macro-unit that was voluntary but easy to go along with the TPM. More viewpoints via the flowpoints etc…potentially better security if managed and correlated well.
Once you do that, the thought was to then put various detection mechanisms in the same areas where we were concerned about malware hiding. So you would spawn random types of malware detection (TAD, honeypots, etc.) in a loosely coupled but tightly integrated heterarchy that would use things like GPUs and stream processing to be as random, in many situations, as the malware it was trying to detect. Tight widgets or modules were required…and a new way of looking at the problem…which is not easy in this industry.
At the point where you could draw information from various points like that, it would then need a weighting system that would overall contribute to a threat state database but in a multi-diff type aspect rather than an absolute. Each organism/Endpoint and visibility point/Flowpoint would use the standards we later talked about at Infra2.0 in order to construct and evolve this heat map to try and help find the “NOT”.
Anyway, loose coupling like this gives you so much more power if you can manage it appropriately, but the industry, whether through an inability to handle that much data or whatever, often likes a hard number and a finite call one way or the other.
Anyway, lots more to it but that was the basics and so no need to bore folks…
Good convo. Hoff has the right of it in most of this, and it does not strike me as a fact that Apple’s relationship with Foxconn has necessarily yielded a secure product at the hardware layer. I think we assume it is secure because of course a large company’s QC would never let something slip…
Wait…why do we assume that again?
Good post as always Hoff. I hope you and the family are great.
D
PS. My son loves BJJ. Thanks for the heads up on it a long time ago. Now if I could just find a way to keep his spider-monkey like ass from scaling me like a tree…