Practical UNIX Terrorism

Full Description

What can be said that you don't know already? In case you're completely in the dark, try some of these old but trusted ideas I found after a brief search:

Security holes manifest themselves in (broadly) four ways:


Physical Security Holes

Where the potential problem is caused by giving unauthorised persons physical access to the machine, which might allow them to do things they shouldn't be able to do.

A good example of this would be a public workstation room where it would be trivial for a user to reboot a machine into single-user mode and muck around with the workstation filestore, if precautions are not taken.

Another example of this is the need to restrict access to confidential backup tapes, which may (otherwise) be read by any user with access to the tapes and a tape drive, whether they are meant to have permission or not.


Software Security Holes

Where the problem is caused by badly written items of "privileged" software (daemons, cronjobs) which can be compromised into doing things which they shouldn't oughta.
The most famous example of this is the "sendmail debug" hole (see bibliography) which would enable a cracker to bootstrap a "root" shell. This could be used to delete your filestore, create a new account, copy your password file, anything.
(Contrary to popular opinion, crack attacks via sendmail were not just restricted to the infamous "Internet Worm" - any cracker could do this by using "telnet" to port 25 on the target machine. The story behind a similar hole (this time in the EMACS "move-mail" software) is described in [Stoll].)
New holes like this appear all the time, and your best hopes are to:

A) try to structure your system so that as little software as possible runs with root/daemon/bin privileges, and that which does is known to be robust.

B) subscribe to a mailing list which can get details of problems and/or fixes out to you as quickly as possible, and then ACT when you receive information.

C) When installing/upgrading a given system, try to install/enable only those software packages for which you have an immediate or foreseeable need. Many packages include daemons or utilities which can reveal information to outsiders. For instance, Red Hat installs and starts sendmail, of all things, whether you like it or not. Many TCP/IP packages automatically install/run programs such as rwhod, fingerd, and (occasionally) tftpd, all of which can present security problems.
Careful system administration is the solution. Most of these programs are initialized/started at boot time; you may wish to modify your boot scripts (usually in the /etc, /etc/rc, /etc/rcX.d directories) to prevent their execution. You may wish to remove some utilities completely. For some utilities, a simple chmod(1) can prevent access from unauthorized users.

In summary, DON'T TRUST INSTALLATION SCRIPTS/PROGRAMS! Such facilities tend to install/run everything in the package without asking you. Most installation documentation includes lists of "the programs included in this package"; be sure to review it.


Incompatible Usage Security Holes

Where, through lack of experience, or no fault of his/her own, the System Manager assembles a combination of hardware and software which when used as a system is seriously flawed from a security point of view. It is the incompatibility of trying to do two unconnected but useful things which creates the security hole.

Problems like this are a pain to find once a system is set up and running, so it is better to build your system with them in mind. It's never too late to have a rethink, though.

Some examples are detailed below; let's not go into them here, it would only spoil the surprise.


Choosing a suitable security philosophy and maintaining it

The fourth kind of security problem is one of perception and understanding. Perfect software, protected hardware, and compatible components don't work unless you have selected an appropriate security policy and turned on the parts of your system that enforce it. Having the best password mechanism in the world is worthless if your users think that their login name backwards is a good password! Security is relative to a policy (or set of policies) and the operation of a system in conformance with that policy.

Specific Flaws to Check For:

1) Look for routines that don't do boundary checking or verify input, i.e. the gets() family of routines ( sprintf(), gets(), etc. ), where it is possible to overwrite buffer boundaries. Also strcpy(), which is why most src has:
#define SCPYN(a, b) strncpy(a, b, sizeof(a))
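
The bounded-copy idiom that macro is reaching for can be written out safely as below. This is a minimal sketch: the SCPYN spelling and the bounded_copy_len() helper are illustrative, and the trick only works when the destination is a true array (sizeof on a pointer parameter would give the pointer's size, not the buffer's).

```c
#include <assert.h>
#include <string.h>

/* Bounded copy into a true array: limit to sizeof(a) - 1 and force
 * NUL termination, since strncpy() does not terminate on truncation. */
#define SCPYN(a, b) \
    (strncpy((a), (b), sizeof(a) - 1), (a)[sizeof(a) - 1] = '\0')

/* Illustrative helper: copy src into a fixed 16-byte buffer and
 * return the length actually stored (at most 15). */
size_t bounded_copy_len(const char *src)
{
    char buf[16];
    SCPYN(buf, src);
    return strlen(buf);
}
```

The same principle applies to the gets() family: fgets(buf, sizeof(buf), stdin) supplies the bound that gets() lacks.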

2) SUID/SGID routines written in one of the shells, instead of C or Perl.

3) SUID/SGID routines written in Perl that don't use the "taintperl" program.

4) SUID/SGID routines that use the system(), popen(), execlp(), or execvp() calls to run something else.
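
system() and popen() hand their argument string to a shell, which honours IFS, PATH, and metacharacters, and execlp()/execvp() search PATH. One common partial mitigation is to reject anything suspicious before it gets near a shell. The sketch below assumes a deliberately conservative character set, and looks_shell_safe() is a hypothetical name, not a library routine:

```c
#include <assert.h>
#include <string.h>

/* Reject user-supplied strings that could be abused if they ever
 * reach a shell: empty strings, option-lookalikes, and anything
 * outside a whitelisted character set. The set here is only an
 * illustration, not a complete policy. */
int looks_shell_safe(const char *s)
{
    const char *ok = "abcdefghijklmnopqrstuvwxyz"
                     "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                     "0123456789._-/";
    if (*s == '\0' || *s == '-')    /* empty, or looks like an option */
        return 0;
    return strspn(s, ok) == strlen(s);
}
```

Even with such a filter, the safer path for a privileged program is to avoid the shell entirely: use execv() with an absolute path and a sanitized environment.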

5) Any program that uses relative path names inside the program.

6) The use of relative path names to specify dynamically linked libraries. (look in Makefile).

7) Routines that don't check error return codes from system calls (ie: fork(2), setuid(2), etc.), as in the famous rcp bug.
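
The fix for that class of bug is mechanical: treat a failed setuid(2) as fatal, since the call can fail (for example when a per-user process limit is hit) and the program would otherwise carry on with its old privileges. A minimal sketch, where drop_privileges() is an illustrative name:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

/* Abort rather than continue privileged if the privilege drop fails. */
void drop_privileges(uid_t uid)
{
    if (setuid(uid) != 0) {
        perror("setuid");
        exit(1);        /* never continue running with old privileges */
    }
}
```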

8) Holes can often be found in code that:
A) is ported to a new environment.
B) receives unexpected input.
C) interacts with other local software.
D) accesses system files like passwd, L.sys, etc.
E) reads input from a publicly writable file/directory.
F) diagnostic programs which are typically not user-proofed.

9) Test code for unexpected input. Coverage, data flow, and mutation testing tools are available.

10) Look in man pages, and users guides for warnings against doing X, and try variations of X. Ditto for "bugs" section.

11) Look for seldom used, or unusual functions or commands - read backwards. In particular looking for undocumented flags/arguments may prove useful. Check flags that were in prior releases, or in other OS versions. Check for options that other programs might use. For instance telnet uses -h option to login ...
right, as most login.c's I've seen have:
          if ((getuid()) && hflag) {
                  syslog();
                  exit();
          }
12) Look for race conditions.
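
The classic case is a time-of-check-to-time-of-use race: access(2) followed by open(2) leaves a window in which an attacker can swap in a symlink. The standard countermeasure is to do the check and the creation in one atomic call with O_CREAT|O_EXCL. A sketch, with create_exclusively() as an illustrative name:

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* Atomic check-and-create: the open() fails if the path already
 * exists, including a symlink planted between "check" and "use". */
int create_exclusively(const char *path)
{
    return open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);
}
```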

13) Failure of software to authenticate that it is really communicating with the desired software or hardware module it wants to be accessing.

14) Lack of error detection to reset protection mechanisms following an error.

15) Poor implementation resulting in, for example, condition codes being improperly tested.

16) Implicit trust: Routine B assumes routine A's parameters are correct because routine A is a system process.

17) System stores its data or references user parameters in the user's address space.

18) Inter-process communication: return conditions (passwd OK, illegal parameter, segment error, etc) can provide a significant wedge, esp. when combined with (17).

19) User parameters may not be adequately checked.

20) Addresses that overlap or refer to system areas.

21) Condition code checks may be omitted.

22) Failure to anticipate unusual or extraordinary parameters.

23) Look for system levels where the modules involved were written by different programmers, or groups of programmers - holes are likely to be found.

24) Registers that point to the location of a parameter's value instead of passing the value itself.

25) Any program running with system privileges. (too many progs are given uid 0, to facilitate access to certain tables, etc.)

26) Group or world readable temporary files, buffers, etc.
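
The usual fix for predictable, over-permissive temp files is mkstemp(3), which creates the file with an unpredictable name, mode 0600, and exclusive-create semantics. A small sketch, with the function name and /tmp template as illustrative choices:

```c
#include <assert.h>
#include <stdlib.h>
#include <unistd.h>

/* Create a private temp file and return its open descriptor.
 * mkstemp() fills in the XXXXXX, creates with mode 0600, and
 * fails rather than follow a pre-planted link. Unlinking right
 * away makes it anonymous; the descriptor stays valid. */
int make_private_tempfile(void)
{
    char name[] = "/tmp/bofhXXXXXX";    /* template must end in XXXXXX */
    int fd = mkstemp(name);
    if (fd >= 0)
        unlink(name);
    return fd;
}
```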

27) Lack of threshold values, and lack of logging/notification once these have been triggered.

28) Changing parameters of critical system areas prior to their execution by a concurrent process. (race conditions)

29) Inadequate boundary checking at compile time, for example, a user may be able to execute machine code disguised as data in a data area. (if text and data areas are shared)

30) Improperly handling user generated asynchronous interrupts. Users interrupting a process, performing an operation, and either returning to continue the process or begin another will frequently leave the system in an unprotected state. Partially written files are left open, improper writing of protection infraction messages, improper setting of protection bits, etc often occur.

31) Code that uses fopen(3) without setting the umask. ( eg: at(1), etc. ) In general, code that does not reset the real and effective uid before forking.
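
A privileged program inherits whatever umask its caller had, so fopen(3)'s default 0666 creation mode can end up world-writable. Setting the umask explicitly around file creation removes that ambient state. A sketch, with an illustrative function name, returning the resulting permission bits so the effect can be observed:

```c
#include <assert.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Create path with a known-safe umask regardless of what the
 * caller's environment set; return the file's permission bits. */
mode_t create_with_safe_umask(const char *path)
{
    struct stat st;
    mode_t old = umask(077);        /* strip all group/other bits */
    FILE *fp = fopen(path, "w");    /* 0666 & ~077 == 0600 */
    umask(old);                     /* restore the caller's umask */
    if (fp == NULL)
        return (mode_t)-1;
    fclose(fp);
    if (stat(path, &st) != 0)
        return (mode_t)-1;
    return st.st_mode & 0777;
}
```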

32) Trace is your friend (or truss in SVR4) for helping figure out what system calls a program is using.

33) Scan /usr/local fs's closely. Many admins will install software from the net. Often you'll find tcpdump, top, nfswatch, ... suid'd root for their ease of use.

34) Check suid programs to see if they are the ones originally put on the system. Admins will sometimes put in a passwd replacement which is less secure than the distributed version.

35) Look for programs that were there to install software or loadable kernel modules.

36) Dynamically linked programs in general. Remember the LD_PRELOAD environment variable.

37) I/O channel programming is a prime target. Look for logical errors, inconsistencies, and omissions.

38) See if it's possible for an I/O channel program to modify itself, loop back, and then execute the newly modified code. (instruction pre-load may screw this up)

39) If I/O channels act as independent processors they may have unlimited access to memory, thus system code may be modified in memory prior to execution.

40) Look for bugs requiring flaws in multiple pieces of software, i.e. say program A can be used to change config file /etc/a; now program B assumes the information in /etc/a to be correct, and this leads to unexpected results (just look at how many programs trust /etc/utmp)

41) Any program, especially those suid/sgid, that allow shell escapes.

Anything more would just be taking the fun out of it.



Not associated with O'Reilly & Associates, Inc. © 2000-2020