Wednesday, March 4, 2009

CS591 ST 008

Lab 1.5 (due 3/4/09)

After reading Aleph One's paper "Smashing The Stack For Fun And Profit", it seems that the buffer overflow problem can teach us much about exploits in general.

Looking at The Basic Principles of Information Protection by Saltzer and Schroeder, one design principle in particular stands out: Economy of Mechanism. This single rule seems to play a reverse role in exploits such as buffer overflows. High-level languages such as C were designed to make programming easier for the user. Library functions such as strcpy() were designed to make certain tasks as simple as possible (less complication), without anticipating that this convenience would actually cause more problems than it solves.
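To make that concrete, here is a minimal sketch (my own illustration, not code from Aleph One's paper; the function and buffer names are hypothetical) of the kind of function strcpy() makes dangerously easy to write:

    #include <string.h>

    /* strcpy() copies bytes until it hits a NUL terminator, with no
     * knowledge of the destination's size. */
    void vulnerable(const char *input) {
        char buf[16];        /* fixed-size buffer on the stack */
        strcpy(buf, input);  /* no bounds check: input longer than 15
                              * bytes overruns buf and can clobber the
                              * saved return address */
    }

    int main(int argc, char **argv) {
        if (argc > 1)
            vulnerable(argv[1]);  /* the attacker controls the length */
        return 0;
    }

The API is simple for the programmer, but the mechanism underneath (an unbounded copy onto the stack) is anything but economical in its consequences.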

The lesson from Saltzer and Schroeder isn't that implementing a few of these principles is good enough; all of them are important. Yes, it is important to make your implementation as uncomplicated as possible, but you must also be cognizant of the underlying issues.

So how does this lesson apply to other, or even most, exploits? Most exploits play off the idea that something is being done that was not protected against and probably never thought about. I really liked the "Puzzle for February 15, 2006" about how General William T. Sherman used the element of surprise to fight his foes. The same holds true for program vulnerabilities: programs must be written to expect the unexpected. It is never possible to make a program 100% foolproof, especially as its complexity and size grow, but it should be possible to ensure the basic principles are implemented to the best of the programmer's abilities.

In my own personal opinion and experience, these principles should be part of every programmer's checklist or coding standards. Tools (e.g., Valgrind) should be used to automatically check for the obvious errors (e.g., strcpy() vs. strncpy()) and other deprecated interfaces. Sometimes a programmer needs experience to be able to prevent exploits in his or her code. It is often the case that this experience is not available on a team, and code is written without detailed knowledge of what is really happening (e.g., race conditions, deadlocks, etc.).
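As a sketch of the kind of fix such a checklist item points at (the names here are mine, and even the "safe" variant has an underlying gotcha: strncpy() does not NUL-terminate when the source fills the buffer):

    #include <string.h>

    /* Bound the copy and terminate explicitly; strncpy() alone leaves
     * dst unterminated when src is dstlen-1 bytes or longer. */
    void safe_copy(char *dst, size_t dstlen, const char *src) {
        if (dstlen == 0)
            return;
        strncpy(dst, src, dstlen - 1);  /* copy at most dstlen-1 bytes */
        dst[dstlen - 1] = '\0';         /* guarantee termination */
    }

That explicit terminator is exactly the sort of underlying detail an inexperienced team member can miss, which is why both tooling and experience matter.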

It makes me cringe when I think about defects I raised in a peer review being rejected with the response "...it will just take too long...". This seems to be the attitude and direction many large companies instill in their workers...at least until they get audited...? (stepping down from soap box)
