From: https://news.ycombinator.com/item?id=9953283


tptacek on July 27, 2015 | on: Hacking Team: a zero-day market case study

Frankly: this EAL7 stuff has basically no applicability to the real world. People get EAL6/EAL7/EAL6+ certification for things where (a) they're willing to spend 2x the implementation dollars just for the privilege of selling a product (or part) to a tiny subset of all GSA buyers and (b) the things themselves are actually straightforward to specify fully. Look at the list of EAL6+ products. They're all things like smartcard chips: things with very limited interactions and very well-defined functionality. Real-world software simply isn't like that. Nobody is going to EAL6+ a web browser, or a PDF reader, or even a desktop or server OS kernel (the fact that the best-known example of a formally verified OS kernel is L4 should tell you something).

You bring Common Criteria certification up on a lot of different threads about security. The industry has literally nothing to learn about security from Common Criteria.

Regardless of what you may think about that sentiment: this has very little at all to do with the market for zero-day vulnerabilities.

nickpsecurity on July 27, 2015

You're selective again. I named all kinds of existing products and solutions that do a better job at solving real-world problems in a safe/secure way than mainstream alternatives. You ignored that to focus on the EAL7 thing, claimed high assurance hasn't done anything beyond smart cards (lol), and kind of stopped there. Ok, let's get back to foundations since you didn't read my framework. The methods are more important than the certs themselves. The old stuff (Orange Book) called for strong specifications of requirements, design, each thing a system did, failure modes, and so on. The implementation had to be done in the safest known way, be modular, have well-defined interfaces with input checks, and provably correspond to that spec. Testing, covert channel analysis, configuration management, pentesting, trusted distribution… many requirements on top of it. Later, static analysis, rigorously evaluated code generators, and so on were added to the mix. Early projects, which you claim have no practical value, built secure email, VPNs, databases, object storage, thin clients, web servers, logistics systems, and so on. The empirical assessments (e.g., “lessons learned from…”) showed the various methods caught all kinds of problems, albeit with different payoff rates in different situations.

After the NSA finished off the high-assurance market, the mainstream stuff and all the hacks one could desire prevailed for years. Eventually, DOD/NSA demanded high assurance again with their separation kernel concept, which academia and private companies built. Academia had also been doing strong verification, from math to clever testing, for all kinds of things up to this moment. A common theme from the old days repeated: they focused EAL6-7 type of effort on critical mechanisms that could be easily leveraged for safety/security benefit, since we couldn't do everything like that (good guess on your part). The mechanism could be isolation, analysis, transformation, and so on. The more flexible, the better.

The MILS and Nizza architectures split systems into isolated apps and VMs (e.g., Linux) running on top of strong kernels (e.g., EAL6+). Some of the results did well against NSA pentesters. For others, the tiny amount of trusted code by itself shows they could never have the number of problems of… whatever you wrote that post with. Other work focused on compilers, language type systems, processor enhancements, code generators, DSLs, and so on. Are you saying a fully-documented, predictable, rigorously tested C compiler isn't practical? Or the WCET analysis during compilation, or the pluggable optimizations other groups are working on now?
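
To make the MILS idea concrete, here is a minimal Python sketch under stated assumptions: partitions are isolated by default and may only exchange messages over channels a fixed policy allows, so the trusted code stays tiny. All names are hypothetical; real separation kernels enforce this in hardware and kernel mode, not in Python.

<code python>
# Toy sketch of a MILS-style separation kernel: partitions are isolated
# by default and can only communicate over channels permitted by a
# fixed policy. Hypothetical names; nothing here reflects a real
# kernel's API.

class SeparationKernel:
    def __init__(self, policy):
        # policy: set of (sender, receiver) pairs allowed to communicate
        self.policy = policy
        self.queues = {}   # (sender, receiver) -> list of messages

    def send(self, sender, receiver, message):
        if (sender, receiver) not in self.policy:
            raise PermissionError(f"{sender} -> {receiver} denied by policy")
        self.queues.setdefault((sender, receiver), []).append(message)

    def receive(self, sender, receiver):
        queue = self.queues.get((sender, receiver), [])
        return queue.pop(0) if queue else None

# Only the guard may talk to the crypto partition; the untrusted
# Linux VM partition cannot reach it directly.
kernel = SeparationKernel(policy={("linux_vm", "guard"), ("guard", "crypto")})
kernel.send("linux_vm", "guard", "please sign this")
kernel.send("guard", "crypto", "sanitized request")
kernel.send("linux_vm", "crypto", "exploit attempt")  # raises PermissionError
</code>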

Meanwhile, there were plenty of medium-assurance offerings. Software such as qmail, Secure64, and HYDRA used architectures that greatly reduced risk. GenodeOS took it further by making its architecture plug-and-play with your choice of assured components. Tools such as Astree and SPARK knocked out all kinds of errors in embedded systems plus components of larger systems. Ada, the MLs, and Eiffel (esp. Design by Contract & SCOOP concurrency) did the same in regular ones. Cornell's SWIFT, Ur/Web, Opa, and SPECTRE all made web applications immune to certain types of attacks in different ways without much effort by developers. We saw the formation of all kinds of secure storage, networking, backup, synchronization, virtualization, recovery, etc. in academia, with a subset rigorously analyzed and some also integrated with production software in prototypes. We saw hypervisor and paravirtualization work that made the OS itself untrusted. We saw CPU designs such as SAFE, CHERI, DIFT variants, and those leveraging crypto beat the vast majority of attacks down at the CPU level, with one proving properties down to the gates.
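
The Design by Contract mention is easy to illustrate. Below is a minimal Python sketch of the idea Eiffel builds in natively: preconditions and postconditions checked on every call, so violated assumptions fail loudly instead of corrupting state. The decorator and function names are illustrative only.

<code python>
# Minimal Design-by-Contract sketch in Python, imitating what Eiffel
# provides natively: preconditions and postconditions checked at every
# call, turning violated assumptions into immediate failures rather
# than silent corruption. Names are illustrative only.
import functools

def contract(pre=None, post=None):
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition violated in {fn.__name__}"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result), f"postcondition violated in {fn.__name__}"
            return result
        return wrapper
    return decorate

@contract(pre=lambda balance, amount: amount > 0 and amount <= balance,
          post=lambda new_balance: new_balance >= 0)
def withdraw(balance, amount):
    return balance - amount

withdraw(100, 30)    # fine
withdraw(100, 200)   # AssertionError: precondition violated
</code>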

Tons and tons of work. The best stuff is where effort is expended once to pay off many times. Tagged/capability CPUs, better architectures for OSes, compilers that automatically enforce strong safety/security, static analysis tools that prove the absence of common bugs, type systems for high-level languages that are easy to code in… the list goes on. These are all very practical, with many used in real projects or products. The thing they all have in common is that they (a) believably do their job and (b) result in a drastic reduction of risk and attack surface at every layer of our systems. Widespread adoption of and investment in such methods that work, rather than mainstream ones that don't, will have a direct impact on “the market for zero-day vulnerabilities.” Given pervasive deployment, that market would mostly disappear outside subversions and interdictions.

And I encourage naysayers such as yourself to put effort into such strong methods, to bring those that aren't at mainstream readiness up to it. Your own mind would be very beneficial to academics designing secure filesystems and messaging systems that rely on necessarily complex protocols and crypto. You might knock out problems they didn't see. There might be many other people on HN with similar things to offer and great results to show for it in the future. It's why I respond to these misleading comments of yours in detail. One day, someone reading them might be inspired to do better than any project I referenced, or simply put effort into those proven to already get plenty of results. It is worth it even if the troll or failure-to-get-it rate of those reading is 99%. That 1% might make one of these real: everything I cited started with someone who decided to build on proven theory and practice in contrast to the mainstream.

So, I'll stay at it even if you think highly secure processors, kernels, compilers, type systems, web apps, databases, middleware, and so on have… “no applicability to the real world.” Not for the majority of it, I'll agree. They prefer their IT assets served to their opponents on a silver platter. I write for the others.

tptacek on July 27, 2015

People have been plowing money into this dead-end for decades, as you've ably described here. The thing laypeople need to remember when they read these impressive-sounding litanies of high-assurance systems is that the components that have been formally assured are the simplest parts of the system. They're demonstrated to function as specified, in many cases rigorously. But so what? We also rely on assumptions about TLB coherence in our virtual memory systems (TLBs are probably not formally assured, given the errata). Are we free of memory corruption flaws because we assume the VM system is secure? Of course not.

And so it goes with systems built on high-assurance components. It's possible, even likely, that the assured components aren't going to generate security flaws — and their designers and the CCTLs will certainly crow about that. But so what? The security vulnerabilities in most systems occur in the joinery, not the lumber.

Virtually every programming language ever used promised to make systems immune to “certain types of attacks”. Even Rails made this promise. The obvious problem is, even if you succeed in immunizing a system against “certain types of attacks”, attackers just switch to other classes of attacks.

It is not enough to close the doors on, for instance, memory corruption flaws. Virtually every modern mainstream programming language accomplishes this with ease, and yet systems are still riddled with security flaws.

Why? Because security flaws are simply the product of violated assumptions. Every bug violates an assumption and so every bug has the potential to be leveraged by an attacker to compromise security. Unless your programming environment literally promises “bug free” — which is to say, you're designing trivially simple components that never change and have operational lifetimes measured in decades — there is no silver bullet.
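
A small illustration of that argument: the Python handler below is completely memory-safe, yet trivially exploitable, because the flaw is a violated assumption about authorization rather than memory corruption. The example is hypothetical, not drawn from any real codebase.

<code python>
# Memory-safe, yet insecure: an authorization check that assumes the
# client-supplied account id belongs to the logged-in user. No buffer
# overflow is needed; the attacker just asks for someone else's data.

ACCOUNTS = {"alice-1": {"owner": "alice", "balance": 5000},
            "bob-1":   {"owner": "bob",   "balance": 120}}

def get_balance_insecure(session_user, account_id):
    # Violated assumption: "users only request their own accounts."
    return ACCOUNTS[account_id]["balance"]

def get_balance_secure(session_user, account_id):
    account = ACCOUNTS[account_id]
    if account["owner"] != session_user:      # check the assumption explicitly
        raise PermissionError("not your account")
    return account["balance"]

print(get_balance_insecure("bob", "alice-1"))  # 5000 -- leaked, no memory bug
print(get_balance_secure("bob", "alice-1"))    # PermissionError
</code>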

nickpsecurity on July 27, 2015

You make decent points but are missing the bigger picture: the vast majority of problems with security, esp. code injection, are caused by mechanisms or design patterns that create insecurity by default. It takes insane amounts of effort to use the basic building blocks in complex applications without creating problems. The building blocks themselves are often simple or can be made that way. That you say the robust methods only work on the simplest stuff is actually an endorsement of my approach if we focus them on building blocks. That's what I mainly push them for, so let's test my theory with a real-world example. We'll only use techniques from production systems made before the 80's that were commercially successful and that exist today in some form. That should make it easy to argue practicality. It gives us the Burroughs B5500 (1964) and the IBM System/38 (1979). Pointers are tagged for protection, their actual values inaccessible to apps, and created by the program loader only. Memory is tagged as code or data at load time, with all input from I/O tagged as data by default by the hardware. Input can't be executed unless the administrator explicitly allows it, and it's actually the compiler that does that anyway, since apps come as source in a type-safe HLL in the Burroughs model. Interfaces are checked during compilation, too. The processor checks these tags on every instruction. It also does bounds checking, overflow checking, type-checking of function-call arguments, and stack protection. Checks and processor run in parallel for performance, with the final state not written unless the check passes. So, you can't smash pointers, arrays, buffers, stacks, or individual data with an overflow: all of it just generates exceptions, which are recovered from or freeze the app with admin notification.
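
A toy model of the tagging scheme described above may help. This is a simplified sketch, not a faithful emulator of either machine: words carry tags, only the loader mints CODE and POINTER tags, and every execute or dereference is checked, as the real hardware did on each instruction.

<code python>
# Toy model of a tagged architecture in the B5500/System-38 spirit:
# every word carries a tag, only the loader can mint CODE or POINTER
# tags, and the "processor" refuses to execute DATA or honor a forged
# or out-of-bounds pointer. Simplified sketch only.

from dataclasses import dataclass

@dataclass
class Word:
    tag: str       # "CODE", "DATA", or "POINTER"
    value: object

class TaggedMachine:
    def __init__(self):
        self.memory = []

    def load_program(self, instructions):
        # Only the trusted loader assigns CODE tags.
        base = len(self.memory)
        self.memory.extend(Word("CODE", i) for i in instructions)
        return base

    def read_input(self, data):
        # All I/O is tagged DATA by default -- it can never be executed.
        base = len(self.memory)
        self.memory.extend(Word("DATA", b) for b in data)
        return Word("POINTER", (base, len(data)))  # loader-minted, with bounds

    def fetch_execute(self, addr):
        word = self.memory[addr]
        if word.tag != "CODE":
            raise RuntimeError("exception: attempt to execute non-code word")
        return word.value

    def deref(self, pointer, offset):
        if pointer.tag != "POINTER":
            raise RuntimeError("exception: not a pointer")
        base, length = pointer.value
        if not 0 <= offset < length:               # bounds check on every access
            raise RuntimeError("exception: out-of-bounds access")
        return self.memory[base + offset]

m = TaggedMachine()
pc = m.load_program(["add", "store"])
buf = m.read_input(b"malicious payload")
m.fetch_execute(pc)            # fine: loader-tagged code
m.fetch_execute(buf.value[0])  # raises: input is DATA and can't be executed
</code>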

So, you want to hijack the app via a corrupted PDF or network packet. Assume, as you said, that the simple mechanisms above were implemented at EAL6-7 and apps just used them. Where would you start with a software attack (no rowhammer lol) with input to an app if you only got exceptions when hitting pointers, data fields, memory management, stacks, and arrays/buffers? What's left? If your claim is true, then these simple modifications provide no meaningful increase in the reliability or security of our systems. There are other security risks, but I'm focusing on code injection via attacking software with input. I predict the attacker's job is so difficult in this model that most would go for social engineering or sabotaging executables to attack the compiler/installer/loader. Those are also protected by these mechanisms and ruggedly built (e.g., Ada or SPARK with all checks on). You're actually more knowledgeable and skilled than me at the many implementation attack methods. How many are left at this point? Seriously, so I can counter them too.

Funny you mentioned hardware. It certainly does have errata here and there. Yet, that's despite tens of millions to billions of transistors running concurrently. Its error rate is actually incredible. I wonder why. Let's look at the design flow for Intel, IBM, etc.: specs to RTL to gates with equivalence checking at each layer; lots of testing; formal verification (Intel) and falsification (IBM) of stuff at various layers; synthesis tools with validation approaches for them; generic components with interfaces and timing analysis; gate-level testing to see where the tools were lying; comparisons of the instrumented chip to the models after a fab run. The difficulties were overcome by constantly investing in tools for various problems and heuristics that made them work better. Guess what? Those methods look very similar to the B3, A1, EAL6, and other assurance activities. They also worked: quality in terms of errata varied from staying steady to improving over time despite exponential increases in complexity.
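
As a feel for what "equivalence checking at each layer" means, here is a tiny sketch: a gate-level ripple-carry adder checked exhaustively against its arithmetic specification. Industrial flows do this symbolically with BDD/SAT engines at vastly larger scale; 4-bit brute force is just enough to show the idea.

<code python>
# Tiny illustration of equivalence checking: a gate-level ripple-carry
# adder is checked exhaustively against the arithmetic spec. Industrial
# tools do this symbolically (BDDs/SAT); 4 bits and brute force are
# enough to demonstrate the technique.

from itertools import product

def full_adder(a, b, cin):
    # Gate-level full adder: XOR/AND/OR gates only.
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_carry_add(x_bits, y_bits):
    # "Gate-level" implementation: chain of full adders, LSB first.
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

def spec_add(x, y, width=4):
    # Specification: plain integer addition, split into sum and carry-out.
    total = x + y
    return total % (2 ** width), total // (2 ** width)

def to_bits(n, width=4):
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))

# Equivalence check: every input vector must agree with the spec.
for x, y in product(range(16), repeat=2):
    out_bits, carry = ripple_carry_add(to_bits(x), to_bits(y))
    assert (from_bits(out_bits), carry) == spec_add(x, y), (x, y)
print("gate-level adder is equivalent to the spec for all 4-bit inputs")
</code>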

Believe it or not, you don't need to verify a whole system at the highest levels. I'm not even promising absolute security from the effort: just saying systems designed this way have had incredible resilience to pentests, faults, and external attacks. I say invest the effort into mechanisms like those above, languages immune to what we can make them immune to, analysis tools catching what we can, compilers, the most-used parts of kernels, interfaces (esp. glue), parsers, and so on. These have already been built; this isn't theory. It's really just re-applying existing work to a new system. Less than 1% of the code and design, done right, knocks out 99% of the routes for code injection and many other issues in the rest of the system. The rest we catch with security research and reviews. Or recover from after monitoring detects problems.

So, Thomas, would you trust an x86-style processor with a monolithic kernel coded in C? Or a system like EROS running on my above CPU that only uses safe mechanisms (hardware-enforced), safe languages, and robust tools for making one properly use the other? Even with a COTS-style implementation, the number of vulnerabilities and their severity should nose-dive. Your current position is that 400 kernel vulnerabilities and thousands of user-level vulnerabilities resulting in malware execution are better than thousands of user-mode exceptions, a few kernel exceptions, and maybe a few injections from what we didn't see coming. I disagree and think we can do better. Friggin' 1960s-1970s tech had better security & reliability than current architectures! Academics (see crash-safe.org or CHERI) have done it with way less time and money than Intel, IBM, etc. So, why do you speculate? Methods that got results against problems before will get results against the same kinds of problems again. We just need to apply them, in the most cost-effective way. That's all I preach.

tptacek on July 28, 2015

If your “system like EROS running on my above CPU that only uses safe mechanisms (hardware-enforced), safe languages, and robust tools” ran a browser, I would trust Chrome on x64 more than that browser. Not even a remotely tough call. My point is, when people say “use high-assurance systems”, they're saying “you don't get to use browsers anymore”.

nickpsecurity on July 28, 2015

Chrome's based on the OP browser, SFI/CFI, and segments for POLA that came out of my side of the field. They weakened those security models to get extra performance, because the good stuff had up to a 50% hit on your favorite architectures. The result was plenty of breaks in their model despite its clever design. I did cite them as an example of the mainstream trying to leverage proven techniques, and they did get best paper in 2009 for the attempt. It just got unacceptably weakened. Meanwhile, there's DARPAbrowser, OP2, Gazelle, Tahoma, the Illinois Browser Operating System, my scheme of running a browser in a microkernel partition behind guard functions, the old scheme of a dedicated box with a KVM for switching, compiler transformations, diversification, and so on. These are all either browsing schemes more secure than Chrome or ways to better prevent/contain damage from arbitrary apps than the popular methods. I've been posting these on forums for some time now. Strong INFOSEC research certainly builds & evaluates browser architectures. Only around 6 groups are making attempts & no help from the mainstream, as usual. They will do occasional knockoffs with less security (i.e., Chrome). IBOS has a nice table [1] showing what containment Chrome achieved vs. theirs, which leveraged Orange Book B3 methods. You'd have lost the bet.

[1] https://www.usenix.org/legacy/events/osdi10/tech/full_papers
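
For readers unfamiliar with the SFI technique mentioned above, here is a minimal sketch of the address-masking idea. Real SFI rewrites machine code to insert masking instructions before loads and stores; this Python version only models the arithmetic, and the memory layout is hypothetical.

<code python>
# Minimal sketch of software fault isolation (SFI): before every load
# or store, the sandboxed module's address is masked so it can only
# land inside its own region. The cheapness of this check is where the
# performance-vs-strength trade-off discussed above comes from.

SANDBOX_BASE = 0x10000          # region start (hypothetical layout)
SANDBOX_SIZE = 0x1000           # must be a power of two for cheap masking
SANDBOX_MASK = SANDBOX_SIZE - 1

memory = bytearray(0x20000)     # flat "address space" for the demo

def sfi_store(addr, value):
    # One AND and one OR confine the address to the sandbox region,
    # with no branch needed -- that's why the overhead can be low.
    confined = SANDBOX_BASE | (addr & SANDBOX_MASK)
    memory[confined] = value

sfi_store(0x10010, 0x41)        # in-bounds write lands where intended
sfi_store(0xDEAD_BEEF, 0x42)    # "escape" attempt is forced back inside
assert all(b == 0 for b in memory[:SANDBOX_BASE])  # nothing outside touched
</code>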

wglb on July 29, 2015

Do any of these secure browsers have JavaScript?

nickpsecurity on July 29, 2015

They're all prototypes illustrating better security architectures. Such work often doesn't have all functionality included. Nonetheless, these support JavaScript: the OP1 & OP2 web browsers; Microsoft's Gazelle; Tahoma; IBOS; the two kludge solutions I mentioned. So, all of them except the DARPAbrowser, which was just a demo of Combex's methods. Papers below on their architectures, security analysis, and performance evaluation if you're interested. Far from “use it now,” I'm advocating that they illustrate the difference between strong security design and the mainstream while providing something to build on. My claim is that building on stuff like this would reduce the impact of hackers vs. popular browsers. Many trusted components can also be built to medium or high assurance because they're simpler.

DARPAbrowser demo http://www.combex.com/tech/darpaBrowser.html

Designing and Implementing the OP and OP2 Web Browsers http://web.engr.illinois.edu/~kingst/Research_files/grier11….

Multi-principal OS Construction of Gazelle Web Browser http://research.microsoft.com/pubs/79655/gazelle.pdf

Tahoma - A Safety-oriented Platform for Web Applications http://homes.cs.washington.edu/~gribble/papers/gribble-Tahom

Trust and Protection in the Illinois Browser Operating System https://www.usenix.org/legacy/events/osdi10/tech/full_papers
