Abstract
As a follow-up to April's Security column on designing secure software, Peter has put together a quick guide of must-do secure programming techniques along with advice on methods to avoid. He also includes a number of valuable online resources. (3,600 words)

Back in April, I wrote a column on secure software design in which I chastised Sun and other operating system and software vendors. The fact is, most vendors do not have a standard security system for code creation and review in place. They do not perform code reviews of programs that will have an impact on security. And they do not force adherence to guidelines that would avoid many of the security problems we see today. The result is the industry as it is today -- far too many security holes, advisories, and patches.

Due to the importance of secure programming, and its great interest to readers, we revisit the topic.

This month, we'll expand on April's column, with input from security experts Gene Spafford and Matt Bishop. This column attempts to collect the current wisdom into a complete Unix Secure Programming FAQ. We hope you'll find it useful, and that it will ease the task of writing secure code. As with all FAQs, comments, suggestions, and corrections are encouraged. Periodic updates will be published as changes warrant and interest demands.

0. Overview
1. When should these methods be applied?
2. Security design principles
3. Secure programming methods
4. Insecure programming methods
5. Testing program security

0. Overview

Too many programs have security holes in them. In the current state of the industry, code is being released with too little testing and with little or no regard to secure programming techniques. This FAQ attempts to be a tool for programmers, easing the process of writing secure programs.

It is important to apply good programming techniques, even when your code is expected to be used in limited situations or for limited duration. Many programs are used beyond their originally planned scope. Ivan Krsul found, in studies he did for his Purdue University Ph.D. thesis, that the majority of security flaws (historically speaking) have been the result of programs operating in a different environment than the one their designer knew or imagined. For example, the programmer may have believed that certain system calls could never fail, or that the program could never be invoked using non-text arguments. Thus, one of the best things a programmer can do to code defensively is to question assumptions, thinking carefully about whether or not they are valid, and imagining conditions that might render them false.

This FAQ attempts to codify the thoughts, techniques, steps, and system calls that should be used for any program that could affect system security.

1. When should these methods be applied?

It would be nice if these secure programming methods were used on all programs. After all, most of them are simply good programming techniques, whether the program is security-related or not. However, applying every technique to every program takes time and effort that is not always available. If these methods can't be used everywhere, they should at least be used for "important" programs. Specifically, these methods should be applied to the following:

  • All setuid and setgid programs
  • All network daemons (programs that accept network connections)
  • All programs that require atomicity for security (for example, check access permission of a file and open it)
  • Programs that run with input from outside or use information obtained from the environment (for example: mail agents for users, PATH variable for spawning subprocesses)
  • Programs used for system administration

2. Security design principles

Regardless of the programming language used, the purpose of the program, and the techniques used to write it, the following principles help ensure the program is as free of security flaws as possible:

  1. Least privilege. Program and use the minimum sufficient privilege to accomplish the task. Ask, "What privileges does the software need?" not, "What privileges does the software want?"
  2. Economy of mechanism. Short, simple code will have fewer bugs than long, complex code. Determine the minimum necessary to do the job.
  3. Complete mediation. Check every access to an object, every return code from every call, and every variable value at a decision point.
  4. Open design. Do not depend on security through obscurity.
  5. Separation of privilege. Keep privileges necessary at different times in different routines or programs.
  6. Least common mechanism. Users should share resources as little as possible; minimize shared resources.
  7. Psychological acceptability. Security controls must be easy to use or they will be bypassed by users.
  8. Fail-safe defaults. Deny by default, and fail "closed" (without granting the request).
  9. Code reuse. Reuse previously tested code when possible.
  10. Distrust the unknown. Anything provided by users or from outside of the program is suspect.
  11. Anticipate problems before they arise. Determine what security problems may arise from the functionality of your program and design to minimize these problems before you start writing the program.

3. Secure programming methods

Implement the software following good programming practice and secure software guidelines. Appropriate information on which programming techniques, system calls, and library calls to use and avoid is not readily available. Chapter 23 of Practical Unix and Internet Security by Simson Garfinkel and Gene Spafford has quite a lot of valuable information on secure and insecure programming techniques. Some of it is abstracted here.

  • Check all command-line arguments.
  • Check all system call parameters and system call return codes.
  • Check arguments passed in environment parameters and don't depend on Unix environment variables.
  • Be sure all buffers are bounded.
  • Do bounds checking on every variable before the contents are copied to a local buffer.
  • If creating a new file, use O_EXCL and O_CREAT flags to assure that the file doesn't already exist.
  • Use lstat() to make sure a file is not a link, if appropriate.
  • Use the following library calls instead of their alternatives: fgets(), strncpy(), strncat(), snprintf(). Generally speaking, use functions that check lengths (termination character check isn't enough).
  • Likewise, use execve(), carefully, if you must spawn a process.
  • Explicitly change directories (chdir()) to an appropriate directory at program start.
  • Set limit values to disable creation of a core file if the program fails: a core file could hold passwords or state information that were in memory.
  • If using temporary files, consider using the tmpfile() or mktemp() library calls to create them (although most mktemp() implementations have problematic race conditions).
  • Have internal consistency-checking code.
  • Include lots of logging, including date, time, uid and effective uid, gid and effective gid, terminal information, pid, command-line arguments, errors, and originating host.
  • Make the program's critical portion as short and simple as possible.
  • Always use full pathnames for any file arguments.
  • Check user input to be sure it contains only "good" characters.
  • Make good use of tools such as lint.
  • Be aware of race conditions, including deadlock conditions and sequencing conditions.
  • Place timeouts and load-level limits on incoming network-oriented read requests.
  • Place timeouts on outgoing network-oriented write requests.
  • Use session encryption to avoid session hijacking and hide authentication information.
  • Use chroot() to set program context to a subset of the system whenever possible.
  • If possible, statically link secure programs.
  • Do reverse DNS lookups on a connection when you need a hostname.
  • Shed or limit excessive loads in network daemons.
  • Put reasonable timeout limits on network reads and writes.
  • Prevent more than one copy of a daemon from running, if appropriate.

4. Insecure programming methods

  • Avoid routines that fail to check buffer boundaries when manipulating strings, particularly gets(), strcpy(), strcat(), sprintf(), fscanf(), scanf(), vsprintf(), realpath(), getopt(), getpass(), streadd(), strecpy(), and strtrns().
  • Likewise, avoid execlp() and execvp().
  • Never use the system() and popen() library calls.
  • Do not create files in world-writable directories.
  • Generally, don't create setuid or setgid shell scripts.
  • Don't make assumptions about port numbers; instead, use getservbyname().
  • Don't assume connections from low-numbered ports are legitimate or trustworthy.
  • Don't trust any IP address; if you want authentication, use cryptography. (Reverse DNS lookup provides a minimal level of assurance.)
  • Don't require clear-text authentication information.
  • Avoid any guessable or replayable seed to random number generators.
  • Don't try to recover from a serious error; output details and terminate.
  • Bracket sections of code that require higher privilege with setuid() and setgid() functions.
  • Consider using perl -T or taintperl for writing setuid programs.

5. Testing program security

Test the software using the same methods crackers use:

  • Try to overflow every buffer in the package
  • Try to abuse command-line options
  • Try to create every race condition conceivable
  • Have someone besides the designer and implementor review and test the code
  • Read through the code, thinking like a cracker, looking for vulnerabilities

Implementation of these steps should improve the quality of software, and reduce bugs in code, especially security holes. (Be sure to check out the Resources below for more online information.)

Tools
A new release of the commercial version of Tripwire is available. Details can be found at the Visual Computing home page.

Letters
In last month's column, I included an open letter asking for help in protecting the contents of a Web document hierarchy in a padded cell environment. Gene Spafford replied with the method used to protect the COAST archives. This seems like a great solution to the problem.

Peter,

I read your reply to the person asking about the document hierarchy under a padded cell server. You said you were unaware of a good solution.

Well, I have had good success with both loopback mounts and with NFS mounts into chrooted server environments (do the mounts before the chroot and they seem to be preserved).

For instance, if you do a loopback read-only mount of a filesystem to the server, it helps protect the files as well as making a limited set of the files available for export. Meanwhile, you can make the "real" files available to the maintainer via some other mechanism.

We've been maintaining the COAST archive this way for nearly four years now.

Cheers,
Gene Spafford

Gene notes that files on loopback mounted file systems aren't immutable. Sun has documented that processes with the proper privileges may modify files on loopback mounted file systems. The justification for this functionality is unknown. Therefore, while it provides another layer of protection for contents, it's not as secure as one would hope.


Resources


*The first seven of the security design principles are derived from Saltzer and Schroeder's paper The Protection of Information in Computer Systems, Proceedings of the IEEE, September 1975.

About the author
Peter Galvin is chief technologist for Corporate Technologies Inc., a systems integrator and VAR. He is also adjunct system planner for the Computer Science Department at Brown University, and has been program chair for the past four SUG/SunWorld conferences. As a consultant and trainer, he has given talks and tutorials worldwide on the topics of system administration and security. He has written articles for Byte and Advanced Systems (SunWorld) magazines, and the newsletter Superuser. Peter is co-author of the best-selling Operating Systems Concepts textbook. Reach Peter at [email protected]
