Almost Everything You Ever Wanted To Know About Security*
*(but were afraid to ask!)
This document is meant to answer some of the questions which regularly
appear in the Usenet newsgroups "comp.security.misc" and "alt.security",
and to provide some background to the subject for newcomers to
those newsgroups.
This FAQ is maintained by Alec Muffett (aem@aber.ac.uk, uknet!aber!aem),
with contributions from numerous others [perhaps]. The views expressed
in the document are the personal views of the author(s), and it should
not be inferred that they are necessarily shared by anyone with whom the
author(s) are now, or ever may be, associated.
Many thanks go to (in no particular order): Steve Bellovin, Matt Bishop,
Mark Brader, Ed DeHart, Dave Hayes, Jeffrey Hutzelman, William LeFebvre,
Wes Morgan, Rob Quinn, Chip Rosenthal, Wietse Venema, Gene Spafford,
John Wack and Randall Atkinson.
Disclaimer: Every attempt is made to ensure that the information
contained in this FAQ is up to date and accurate, but no responsibility
will be accepted for actions resulting from information gained herein.
Questions which this document addresses:
Q.1 What are alt.security and comp.security.misc for?
Q.2 What's the difference between a hacker and a cracker?
Q.3 What is "security through obscurity"?
Q.4 What makes a system insecure?
Q.5 What tools are there to aid security?
Q.6 Isn't it dangerous to give cracking tools to everyone?
Q.7 Where can I get these tools?
Q.8 Why and how do systems get broken into?
Q.9 Who can I contact if I get broken into?
Q.10 What is a firewall?
Q.11 Why shouldn't I use setuid shell scripts?
Q.12 Why shouldn't I leave "root" permanently logged on the console?
Q.13 Why shouldn't I create Unix accounts with null passwords?
Q.14 What security holes are associated with X-windows (and other WMs)?
Q.15 What security holes are associated with NFS?
Q.16 How can I generate safe passwords?
Q.17 Why are passwords so important?
Q.18 How many possible passwords are there?
Q.19 Where can I get more information?
Q.20 How silly can people get?
---------------------------------------------------------------------------
Q.1 What are alt.security and comp.security.misc for?
Comp.security.misc is a forum for the discussion of computer security,
especially issues relating to Unix (and Unix-like) operating systems.
Alt.security used to be the main newsgroup covering this topic, as well
as other issues such as car locks and alarm systems, but with the
creation of comp.security.misc, this may change.
This FAQ will concentrate wholly upon computer related security issues.
The discussions posted range from the likes of "What's such-and-such
system like?" and "What is the best software I can use to do so-and-so?"
to "How shall we fix this particular bug?", although there is often a
low signal-to-noise ratio in the newsgroup (a problem which this FAQ
hopes to address).
The most common flamewars start when an apparent security novice posts a
message saying "Can someone explain how the such-and-such security hole
works?" and s/he is immediately leapt upon by a group of self-appointed
people who crucify the person for asking such an "unsound" question in a
public place, and flame him/her for "obviously" being a cr/hacker.
Please remember that grilling someone over a high flame on the grounds
that they are "a possible cr/hacker" does nothing more than generate a
lot of bad feeling. If computer security issues are to be dealt with in
an effective manner, the campaigns must be brought (to a large extent)
into the open.
Implementing computer security can turn ordinary people into rampaging
paranoiacs, unable to act reasonably when faced with a new situation.
Such people take an adversarial attitude to the rest of the human race,
and if someone like this is in charge of a system, users will rapidly
find their machine becoming more restrictive and less friendly (fun?) to
use.
This can lead to embarrassing situations, eg: (in one university) the
banning of a head of department from the college mainframe for using a
network utility that he wasn't expected to. This apparently required a
lot of explaining to an unsympathetic committee to get sorted out.
A more sensible approach is to secure a system according to its needs,
and if its needs are great enough, isolate it completely. Please, don't
lose your sanity to the cause of computer security; it's not worth it.
Q.2 What's the difference between a hacker and a cracker?
Let's get this question out of the way right now:
On USENET, calling someone a "cracker" is an unambiguous statement that
some person persistently gets his/her kicks from breaking into other
people's computer systems, for a variety of reasons. S/He may pose
some weak justification for doing this, usually along the lines of
"because it's possible", but most probably does it for the "buzz" of
doing something which is illicit/illegal, and to gain status amongst a
peer group.
Particularly antisocial crackers have a vandalistic streak, and delete
filestores, crash machines, and trash running processes in pursuit of
their "kicks".
The term is also widely used to describe a person who breaks the copy
protection of microcomputer applications software in order to keep or
distribute free copies.
On USENET, calling someone a "hacker" is usually a statement that said
person holds a great deal of knowledge and expertise in the field of
computing, and is someone who is capable of exercising this expertise
with great finesse. For a more detailed definition, readers are
referred to the Jargon File [Raymond].
In the "real world", various media people have taken the word "hacker"
and coerced it into meaning the same as "cracker" - this usage
occasionally appears on USENET, with disastrous and confusing results.
Posters to the security newsgroups should note that they currently risk
a great deal of flamage if they use the word "hacker" in place of
"cracker" in their articles.
NB: nowhere in the above do I say that crackers cannot be true hackers.
It's just that I don't say that they are...
Q.3 What is "security through obscurity"?
Security Through Obscurity (STO) is the belief that a system of any sort
can be secure so long as nobody outside of its implementation group is
allowed to find out anything about its internal mechanisms. Hiding
account passwords in binary files or scripts with the presumption that
"nobody will ever find it" is a prime case of STO.
STO is a philosophy favoured by many bureaucratic agencies (military,
governmental, and industrial), and it used to be a major method of
providing "pseudosecurity" in computing systems.
Its usefulness has declined in the computing world with the rise of open
systems, networking, greater understanding of programming techniques, as
well as the increase in computing power available to the average person.
The basis of STO has always been to run your system on a "need to know"
basis. If a person doesn't know how to do something which could impact
system security, then s/he isn't dangerous.
Admittedly, this is sound in theory, but it can tie you into trusting a
small group of people for as long as they live. If your employees get
an offer of better pay from somewhere else, the knowledge goes with
them, whether the knowledge is replaceable or not. Once the secret gets
out, that is the end of your security.
Nowadays there is also a greater need than ever before for the ordinary
user to know details of how your system works, and STO falls down as a
result. Many users today have advanced knowledge of how their
operating system works, and because of their experience will be able to
guess at the bits of knowledge that they didn't "need to know". This
bypasses the whole basis of STO, and makes your security useless.
Hence there is now a need to create systems which attempt to be
algorithmically secure (Kerberos, Secure RPC), rather than just
philosophically secure. So long as your starting criteria can be met,
your system is LOGICALLY secure.
"Shadow Passwords" (below) are sometimes dismissed as STO, but this is
incorrect, since (strictly) STO depends on restricting access to an
algorithm or technique, whereas shadow passwords provide security by
restricting access to vital data.
Q.4 What makes a system insecure?
Switching it on. The adage usually quoted runs along these lines:
"The only system which is truly secure is one which is switched off
and unplugged, locked in a titanium lined safe, buried in a concrete
bunker, and is surrounded by nerve gas and very highly paid armed
guards. Even then, I wouldn't stake my life on it."
(the original version of this is attributed to Gene Spafford)
A system is only as secure as the people who can get at it. It can be
"totally" secure without any protection at all, so long as its continued
good operation is important to everyone who can get at it, assuming all
those people are responsible, and regular backups are made in case of
hardware problems. Many laboratory PC's quite merrily tick away the
hours like this.
The problems arise when a need (such as confidentiality) has to be
fulfilled. Once you start putting the locks on a system, it is fairly
likely that you will never stop.
Security holes manifest themselves in (broadly) four ways:
1) Physical Security Holes.
- Where the potential problem is caused by giving unauthorised persons
physical access to the machine, where this might allow them to perform
things that they shouldn't be able to do.
A good example of this would be a public workstation room where it would
be trivial for a user to reboot a machine into single-user mode and muck
around with the workstation filestore, if precautions are not taken.
Another example of this is the need to restrict access to confidential
backup tapes, which may (otherwise) be read by any user with access to
the tapes and a tape drive, whether they are meant to have permission or
not.
2) Software Security Holes
- Where the problem is caused by badly written items of "privileged"
software (daemons, cronjobs) which can be compromised into doing things
which they shouldn't oughta.
The most famous example of this is the "sendmail debug" hole (see
bibliography) which would enable a cracker to bootstrap a "root" shell.
This could be used to delete your filestore, create a new account, copy
your password file, anything.
(Contrary to popular opinion, crack attacks via sendmail were not just
restricted to the infamous "Internet Worm" - any cracker could do this
by using "telnet" to port 25 on the target machine. The story behind a
similar hole (this time in EMACS) is described in [Stoll].)
New holes like this appear all the time, and your best hopes are to:
a: try to structure your system so that as little software as possible
runs with root/daemon/bin privileges, and that which does is known to
be robust.
b: subscribe to a mailing list which can get details of problems
and/or fixes out to you as quickly as possible, and then ACT when you
receive information.
3) Incompatible Usage Security Holes
- Where, through lack of experience, or no fault of his/her own, the
System Manager assembles a combination of hardware and software which
when used as a system is seriously flawed from a security point of view.
It is the incompatibility of trying to do two unconnected but useful
things which creates the security hole.
Problems like this are a pain to find once a system is set up and
running, so it is better to build your system with them in mind. It's
never too late to have a rethink, though.
Some examples are detailed below; let's not go into them here, it would
only spoil the surprise.
4) Choosing a suitable security philosophy and maintaining it.
>From: Gene Spafford <spaf@cs.purdue.edu>
>The fourth kind of security problem is one of perception and
>understanding. Perfect software, protected hardware, and compatible
>components don't work unless you have selected an appropriate security
>policy and turned on the parts of your system that enforce it. Having
>the best password mechanism in the world is worthless if your users
>think that their login name backwards is a good password! Security is
>relative to a policy (or set of policies) and the operation of a system
>in conformance with that policy.
Q.5 What tools are there to aid security?
1) "COPS"
Managed by Dan Farmer, this is a long-established suite of shell scripts
which forms an extensive security testing system; there is a rudimentary
password cracker, and routines to check the filestore for suspicious
changes in setuid programs, others to check permissions of essential
system and user files, and still more to see whether any system software
behaves in a way which could cause problems.
The software comes in two versions - one written in Perl and one
(largely equivalent) written in shell scripts. The latest version is
very up-to-date on Unix Security holes.
2) "Crack" (+ "UFC").
Written by Alec Muffett, this is a program written with one purpose in
mind: to break insecure passwords. It is probably the most efficient and
friendly password cracker that is publicly available, with the ability
to let the user specify precisely how to form the words to use as
guesses at users' passwords.
It also has an inbuilt networking capability, allowing the load of
cracking to be spread over as many machines as are available on a
network, and it is supplied with an optimised version of the Unix crypt()
algorithm.
An even faster version of the crypt() algorithm, "UFC" by Michael Glad,
is freely available on the network, and the latest versions of UFC and
Crack are compatible and can be easily hooked together.
3) NPasswd (Clyde Hoover) & Passwd+ (Matt Bishop)
These programs are written to redress the balance in the password
cracking war. They provide replacements for the standard "passwd"
command, but prevent a user from selecting passwords which are easily
compromised by programs like Crack.
Several versions of these programs are available on the network, hacked
about to varying degrees in order to provide compatibility for System V
based systems, NIS/YP, shadow password schemes, etc. The usual term for
this type of program is a 'fascist' password program.
4) "Shadow" - a Shadow Password Suite
This program suite (by John F Haugh II) is a set of program and function
replacements (compatible with most Unixes) which implements shadow
passwords, ie: a system where the plaintext of the password file is
hidden from all users except root, hopefully stopping all password
cracking attempts at source. In combination with a fascist passwd
frontend, it should provide a good degree of password file robustness.
>From: jfh@rpp386.lonestar.org (John F. Haugh II)
>Shadow does much more than hide passwords. It also provides for
>terminal access control, user and group administration, and a few
>other things which I've forgotten. There are a dozen or more
>commands in the suite, plus a whole slew of library functions.
5) TCP Wrappers (Wietse Venema)
These are programs which provide a front-end filter to many of the
network services which Unix provides by default. If installed, they can
curb otherwise unrestricted access to potential dangers like incoming
FTP/TFTP, Telnet, etc, and can provide extra logging information, which
may be of use if it appears that someone is trying to break in.
6) SecureLib
>From: phil@pex.eecs.nwu.edu (William LeFebvre)
>You may want to add a mention of securelib, a security enhancer
>available for SunOS version 4.1 and higher.
>Securelib contains replacement routines for three kernel calls:
>accept(), recvfrom(), recvmsg(). These replacements are compatible with
>the originals, with the additional functionality that they check the
>Internet address of the machine initiating the connection to make sure
>that it is "allowed" to connect. A configuration file defines what
>hosts are allowed for a given program. Once these replacement routines
>are compiled, they can be used when building a new shared libc library.
>The resulting libc.so can then be put in a special place. Any program
>that should be protected can then be started with an alternate
>LD_LIBRARY_PATH.
7) SPI
>From: Gene Spafford <spaf@cs.purdue.edu>
>Sites connected with the Department of Energy and some military
>organizations may also have access to the SPI package. Interested (and
>qualified) users should contact the CIAC at LLNL for details.
>SPI is a screen-based administrator's tool that checks configuration
>options, includes a file-change (integrity) checker to monitor for
>backdoors and viruses, and various other security checks. Future
>versions will probably integrate COPS into the package. It is not
>available to the general public, but it is available to US Dept of
>Energy contractors and sites and to some US military sites. A version
>does or will exist for VMS, too. Further information on availability can
>be had from the folks at the DoE CIAC.
Q.6 Isn't it dangerous to give cracking tools to everyone?
That depends on your point of view. Some people have complained that
giving unrestricted public access to programs like COPS and Crack is
irresponsible because the "baddies" can get at them easily.
Alternatively, you may believe that the really bad "baddies" have had
programs like this for years, and that it's really a stupendously good
idea to give these programs to the good guys too, so that they may check
the integrity of their system before the baddies get to them.
So, who wins more from having these programs freely available? The good
guys or the bad? You decide, but remember that less honest tools than
COPS and Crack were already out there, and most of the good guys
didn't have anything to help.
Q.7 Where can I get these tools?
COPS:
V1.04, available for FTP from cert.sei.cmu.edu in pub/cops and
archive.cis.ohio-state.edu in pub/cops.
Crack/UFC:
Crack v4.1f and UFC Patchlevel 1. Available from any major USENET
archive (eg: ftp.uu.net) in volume 28 of comp.sources.misc.
NPasswd:
Currently suffering from being hacked about by many different people.
Version 2.0 is in the offing, but many versions exist in many
different configurations. Will chase this up with authors - AEM
Passwd+:
"alpha version, update 3" - beta version due soon. Available from
dartmouth.edu as pub/passwd+.tar.Z
Shadow:
This is available from the comp.sources.misc directory at any major
USENET archive (see entry for Crack)
TCP Wrappers:
Available for anonymous FTP:
cert.sei.cmu.edu: pub/network_tools/tcp_wrapper.shar
ftp.win.tue.nl: pub/security/log_tcp.shar.Z
Securelib:
The latest version of securelib is available via anonymous FTP from the
host "eecs.nwu.edu". It is stored in the file "pub/securelib.tar".
Q.8 Why and how do systems get broken into?
This is hard to answer definitively. Many systems which crackers break
into are only used as a means of entry into yet more systems; by hopping
between many machines before breaking into a new one, the cracker hopes
to confuse any possible pursuers and put them off the scent. There is
an advantage to be gained in breaking into as many different sites as
possible, in order to "launder" your connections.
Another reason may be psychological: some people love to play with
computers and stretch them to the limits of their capabilities.
Some crackers might think that it's "really neat" to hop over 6 Internet
machines, 2 gateways and an X.25 network just to knock on the doors of
some really famous company or institution (eg: NASA, CERN, AT&T, UCB).
Think of it as inter-network sightseeing.
This view is certainly appealing to some crackers, and certainly leads
to both the addiction and self-perpetuation of cracking.
As to the "How" of the question, this is again a very sketchy area. In
universities, it is extremely common for computer accounts to be passed
back and forth between undergraduates:
"Mary gives her account password to her boyfriend Bert at another
site, who has a friend Joe who "plays around on the networks". Joe
finds other crackable accounts at Mary's site, and passes them around
amongst his friends..." pretty soon, a whole society of crackers is
playing around on the machines that Mary uses.
This sort of thing happens all the time, and not just in universities.
One solution is in education. Do not let your users develop attitudes
like this one:
"It doesn't matter what password I use on _MY_ account,
after all, I only use it for laserprinting..."
- an Aberystwyth Law student, 1991
Teach them that use of the computer is a group responsibility. Make
sure that they understand that a chain is only as strong as its weakest
link.
Finally, when you're certain that they understand your problems as a
systems manager and that they totally sympathise with you, configure
your system in such a way that they can't possibly get it wrong.
Believe in user education, but don't trust to it alone.
Q.9 Who can I contact if I get broken into?
If you're connected to the Internet, you should certainly get in touch
with CERT, the Computer Emergency Response Team.
To quote the official blurb:
>From: Ed DeHart
> The Computer Emergency Response Team (CERT) was formed by the Defense
> Advanced Research Projects Agency (DARPA) in 1988 to serve as a focal
> point for the computer security concerns of Internet users. The
> Coordination Center for the CERT is located at the Software Engineering
> Institute, Carnegie Mellon University, Pittsburgh, PA.
> Internet E-mail: cert@cert.sei.cmu.edu
> Telephone: 412-268-7090 24-hour hotline:
> CERT/CC personnel answer 7:30a.m. to 6:00p.m. EST(GMT-5)/EDT(GMT-4),
> and are on call for emergencies during other hours.
...and also, the umbrella group "FIRST", which mediates between the
incident handling teams themselves...
>From: John Wack <wack@csrc.ncsl.nist.gov>
>[...] FIRST is actually a very viable and growing
>organization, of which CERT is a member. It's not actually true that,
>if you're connected to the Internet, you should call CERT only - that
>doesn't do justice to the many other response teams out there and in the
>process of forming.
>NIST is currently the FIRST secretariat; we maintain an anonymous ftp
>server with a directory of FIRST information (csrc.ncsl.nist.gov:
>~/pub/first). This directory contains a contact file that lists the
>current members and their constituencies and contact information
>(filename "first-contacts").
>While CERT is a great organization, other response teams who do handle
>incidents on their parts of the Internet merit some mention as well -
>perhaps mentioning the existence of this file would help to do that in a
>limited space.
The file mentioned is a comprehensive listing of contact points per
network for security incidents. It is too large to reproduce here; I
suggest that the reader obtains a copy for him/herself by the means
given.
Q.10 What is a firewall?
An (Internet) firewall is a machine which is (usually) attached between
your site and a wide area network. It provides controllable filtering
of network traffic, allowing restricted access to certain Internet port
numbers (ie: services that your machine would otherwise provide to the
network as a whole) and blocks access to pretty well everything else.
Similar machines are available for other network types, too.
Firewalls are an effective "all-or-nothing" approach to dealing with
external access security, and they are becoming very popular with the
rise in Internet connectivity.
For more information on these sorts of topics, see the Gateway paper by
[Cheswick], below.
Q.11 Why shouldn't I use setuid shell scripts?
You shouldn't use them for a variety of reasons, mostly involving bugs
in the Unix kernel. Here are a few of the more well known problems,
some of which are fixed on more recent operating systems.
1) If the script begins "#!/bin/sh" and a link (symbolic or otherwise)
can be made to it with the name "-i", a setuid shell can be immediately
obtained because the script will be invoked: "#!/bin/sh -i", ie: an
interactive shell.
2) Many kernels suffer from a race condition which can allow you to
exchange the shellscript for another executable of your choice between
the times that the newly exec()ed process goes setuid, and when the
command interpreter gets started up. If you are persistent enough, in
theory you could get the kernel to run any program you want.
3) The IFS bug: the IFS shell variable contains a list of characters to
be treated like whitespace by a shell when parsing command names. By
changing the IFS variable to contain the "/" character, the command
"/bin/true" becomes "bin true".
All you need do is export the modified IFS variable, install a command
called "bin" in your path, and run a setuid script which calls
"/bin/true". Then "bin" will be executed whilst setuid.
If you really must write scripts to be setuid, either
a) Put a setuid wrapper in "C" around the script, being very careful
to reset IFS and PATH to something sensible before exec()ing the
script. If your system has runtime-linked libraries, consider the
value of LD_LIBRARY_PATH also.
b) Use a scripting language like Perl which has a safe setuid
facility, and is proactively rabid about security.
- but really, it's safest not to use setuid scripts at all.
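The environment-scrubbing idea behind option (a) can be sketched as
follows (in Python for brevity, although the FAQ suggests a wrapper in
C; the function names and paths here are invented for illustration):

```python
import os

# A sketch of option (a): scrub the environment before handing
# control to a (hypothetical) privileged script.
SAFE_PATH = "/bin:/usr/bin"

def sanitized_env():
    """Return a minimal environment for exec()ing a privileged script."""
    env = {
        "PATH": SAFE_PATH,   # never inherit the caller's PATH
        "IFS": " \t\n",      # reset IFS to plain whitespace
    }
    # Deliberately drop everything else, including LD_LIBRARY_PATH,
    # which could otherwise redirect runtime linking to an attacker's
    # own library.
    return env

def run_wrapped(script):
    # os.execve replaces this process entirely, so the script starts
    # with only the controlled PATH and IFS values above.
    os.execve(script, [script], sanitized_env())
```

The key design point is to build a fresh environment from scratch
rather than trying to delete "known bad" variables from the inherited
one.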
Q.12 Why shouldn't I leave "root" permanently logged on the console?
Using a 'smart' terminal as console and leaving "/dev/console" world
writable whilst "root" is logged in is a potential hole. The terminal
may be vulnerable to remote control via escape sequences, and can be
used to 'type' things into the root shell. The terminal type can
usually be obtained via the "ps" command.
Various solutions to this can be devised, usually by making the console
writable only by its owner and group, and then using the setgid
mechanism on any program which needs to output to the console (eg:
"write").
Q.13 Why shouldn't I create Unix accounts with null passwords?
Creating an unpassworded account to serve any purpose is potentially
dangerous, not for any direct reason, but because it can give a cracker
a toehold.
For example, on many systems you will find an unpassworded user "sync",
which allows the sysman to sync the disks without being logged in. This
appears to be both safe and innocuous.
The problem with this arises if your system is one of the many which
doesn't do checks on a user before authorising them for (say) FTP. A
cracker might be able to connect to your machine via one of a variety of
FTP methods, pretending to be user "sync" with no password, and then
copy your password file off for remote cracking.
Although there are mechanisms to prevent this sort of thing happening in
most modern versions of Unix, to be totally secure requires an in-depth
knowledge of every package on your system, and how it deals with the
verification of users. If you can't be sure, it's probably better not
to leave holes like this around.
Another hole that having null-password accounts opens up is the
possibility (on systems with runtime linked libraries) of spoofing
system software into running your programs as the "sync" user, by
changing the LD_LIBRARY_PATH variable to a library of your own devising,
and running "login -p" or "su" to turn into that user.
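A quick way to spot accounts like this is to scan the password file for
an empty second (password) field. A minimal sketch, with invented
sample entries:

```python
def null_password_accounts(passwd_lines):
    """Return usernames whose password field (field 2) is empty."""
    hits = []
    for line in passwd_lines:
        fields = line.strip().split(":")
        if len(fields) >= 2 and fields[1] == "":
            hits.append(fields[0])
    return hits

# Invented /etc/passwd-style entries for illustration:
sample = [
    "root:QX3a9frm2kLqw:0:0:Operator:/:/bin/sh",
    "sync::1:1:sync:/:/bin/sync",               # no password!
    "aem:aFg7xPqRstw2k:100:10:Alec:/home/aem:/bin/sh",
]
print(null_password_accounts(sample))  # -> ['sync']
```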
Q.14 What security holes are associated with X-windows (and other WMs)?
Lots, some of which affect use of X only, and some of which impact the
security of the entire host system.
I would prefer not to go into too much detail here, and would refer any
reader looking for detailed information to the other FAQs in the
relevant newsgroups. (comp.windows.*)
One point I will make is that X is one of those packages which often
generates "Incompatible Usage" security problems; for instance, if it is
improperly set up, it gives crackers the ability to run X sessions on
hosts under accounts with no password (eg: sync). Read the question
about unpassworded accounts in this FAQ.
Q.15 What security holes are associated with NFS?
Lots, mostly to do with who you export your disks to, and how. The
security of NFS relies heavily upon who is allowed to mount the files
that a server exports, and whether they are exported read only or not.
The exact format for specifying which hosts can mount an exported
directory varies between Unix implementations, but generally the
information is contained within the file "/etc/exports".
This file contains a list of directories and for each one, it has a
series of either specific "hosts" or "netgroups" which are allowed to
NFS mount that directory. This list is called the "access list".
The "hosts" are individual machines, whilst "netgroups" are combinations
of hosts and usernames specified in "/etc/netgroup". These are meant to
provide a method of finetuning access. Read the relevant manual page
for more information about netgroups.
The exports file also contains information about whether the directory
is to be exported as read-only, read-write, and whether super-user
access is to be allowed from clients which mount that directory.
The important point to remember is that if the access list for a
particular directory in /etc/exports contains:
1) <nothing>
Your directory can be mounted by anyone, anywhere.
2) <a specific hostname>
Your directory can be mounted by anyone permitted to run the mount
command at hostname. This might not be a trustworthy person; for
instance, if the machine is a PC running NFS, it could be anyone.
3) <a netgroup name>
If the netgroup:
a) is empty, anyone can mount your directory, from anywhere.
b) contains "(,,)", anyone can mount your directory, from anywhere.
c) contains the name of a netgroup which is empty or contains "(,,)",
anyone can mount your directory, from anywhere.
d) contains "(hostname,,)", anyone on the named host who is permitted
to mount files can mount your directory.
e) contains "(,username,)", the named user can mount your directory,
from anywhere.
4) <a word which is neither a hostname nor a netgroup>
If you meant to export the directory to the host "athena" but actually
type "ahtena", the word "ahtena" is taken as a netgroup name, is found
to be an empty netgroup, and thus the directory can be mounted by
anyone, anywhere.
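The four cases above can be modelled as a toy decision routine. This is
illustrative only, not real NFS code; the data structures are invented,
and nested netgroups (case 3c) are omitted for brevity:

```python
# Toy model of the /etc/exports access-list rules described above.
# netgroups maps a netgroup name to its list of (host, user, domain)
# triples, as in /etc/netgroup; hostnames is the set of known hosts.
def who_can_mount(access_list, netgroups, hostnames):
    if not access_list:
        return "anyone, anywhere"                # case 1: empty list
    results = []
    for entry in access_list:
        if entry in hostnames:
            results.append("anyone on host %s" % entry)   # case 2
        elif entry in netgroups:
            triples = netgroups[entry]
            if not triples or ("", "", "") in triples:
                return "anyone, anywhere"        # cases 3a/3b
            results.append("members of netgroup %s" % entry)
        else:
            # case 4: an unknown word (eg: a typo) behaves like an
            # empty netgroup
            return "anyone, anywhere"
    return "; ".join(results)

# A typo'd hostname silently opens the directory to the world:
print(who_can_mount(["ahtena"], {}, {"athena"}))  # -> anyone, anywhere
```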
So, if you aren't careful about what you put into /etc/exports and
/etc/netgroup you could find that a user with a PC could
a) mount your mainframe filestore as a network disk
b) edit your /etc/passwd or .rhosts or /etc/hosts.equiv ...
c) log into your mainframe as another user, possibly "root"
Disclaimer: The above information may not be true for all platforms
which provide an NFS serving capability, but it is true for all of the
ones in my experience (AEM). It should be noted that the SAFE way to
create an "empty" netgroup entry is:
ngname (-,-,-)
which is a netgroup that matches no-one on no-host on no-NIS-domain.
[ I am STILL working on PC NFS packages / ethics at the moment - AEM ]
Q.16 How can I generate safe passwords?
You can't. The key word here is GENERATE. Once an algorithm for
creating passwords is specified using some systematic method, it
merely becomes a matter of analysing your algorithm in order to find
every password on your system.
Unless the algorithm is very subtle, it will probably suffer from a very
low period (ie: it will soon start to repeat itself) so that either:
a) a cracker can try out every possible output of the password
generator on every user of the system, or
b) the cracker can analyse the output of the password program,
determine the algorithm being used, and apply the algorithm to other
users to determine their passwords.
A beautiful example of this (where it was disastrously assumed that a
random number generator could generate an infinite number of random
passwords) is detailed in [Morris & Thompson].
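To see why a low-period generator is fatal, consider a deliberately
naive, invented generator driven by a 16-bit seed: however random its
output looks, a cracker can enumerate every password it will ever
produce.

```python
import random
import string

def generate_password(seed):
    """A naive password 'generator': 8 chars, driven by a 16-bit seed."""
    rng = random.Random(seed & 0xFFFF)   # only 65536 possible states
    return "".join(rng.choice(string.ascii_lowercase) for _ in range(8))

# The cracker's view: the whole password "space" fits in one small set,
# despite there being 26^8 possible 8-letter strings.
all_passwords = {generate_password(s) for s in range(2 ** 16)}
print(len(all_passwords))  # at most 65536 distinct passwords
```

However fancy the per-password mixing, the number of distinct outputs
can never exceed the number of internal states of the generator.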
The only way to get a reasonable amount of variety in your passwords
(I'm afraid) is to make them up. Work out some flexible method of your
own which is NOT based upon:
1) modifying any part of your name or name+initials
2) modifying a dictionary word
3) acronyms
4) any systematic, well-adhered-to algorithm whatsoever
For instance, NEVER use passwords like:
alec7 - it's based on the user's name (& it's too short anyway)
tteffum - based on the user's name again (it's just reversed)
gillian - girlfriend's name (in a dictionary)
naillig - ditto, backwards
PORSCHE911 - it's in a dictionary
12345678 - it's in a dictionary (& people can watch you type it easily)
qwertyui - ...ditto...
abcxyz - ...ditto...
0ooooooo - ...ditto...
Computer - just because it's capitalised doesn't make it safe
wombat6 - ditto for appending some random character
6wombat - ditto for prepending some random character
merde3 - even for French words...
mr.spock - it's in a sci-fi dictionary
zeolite - it's in a geological dictionary
ze0lite - corrupted version of a word in a geological dictionary
ze0l1te - ...ditto...
Z30L1T3 - ...ditto...
I hope that these examples emphasise that ANY password derived from ANY
dictionary word (or personal information), modified in ANY way,
constitutes a potentially guessable password.
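The point can be demonstrated with a few lines of Python (a
deliberately tiny sketch; Crack's real rule set applies far more
derivations than this):

```python
def derived_guesses(word):
    # A small sample of the derivations a cracker will try on each
    # dictionary word: reversal, case changes, pre/appended digits.
    guesses = {word, word[::-1], word.capitalize(), word.upper()}
    for ch in "0123456789":
        guesses.add(word + ch)   # e.g. wombat6
        guesses.add(ch + word)   # e.g. 6wombat
    return guesses

def is_guessable(password, dictionary):
    password = password.lower()
    return any(password in {g.lower() for g in derived_guesses(w)}
               for w in dictionary)

words = ["gillian", "wombat", "porsche911"]
print(is_guessable("naillig", words))    # True -- reversed dictionary word
print(is_guessable("6wombat", words))    # True -- prepended digit
print(is_guessable("h8f,wq2x", words))   # False -- not derived from these words
```

Every rule you might use to "disguise" a word is a rule the cracker can
apply mechanically, to the whole dictionary, in seconds.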
For more detailed information in the same vein, you should read the
APPENDIX files which accompany Crack [Muffett].
Q.17 Why are passwords so important?
Because they are the first line of defence against interactive attacks
on your system. It can be stated simply: if a cracker cannot interact
with your system(s), and he has no access to read or write the
information contained in the password file, then he has almost no
avenues of attack left open to break your system.
This is also why, if a cracker can at least read your password file (and
if you are on a vanilla modern Unix, you should assume this), it is so
important that he is not able to break any of the passwords contained
therein. If he can, then it is also fair to assume that he can (a) log
on to your system and can then (b) break into "root" via an operating
system hole.
Q.18 How many possible passwords are there?
Most people ask this at one time or another, worried that programs like
Crack will eventually grow in power until they can do a completely
exhaustive search of all possible passwords, to break into a specific
user's account - usually root.
If (to simplify the maths) we make the assumptions that:
1) Valid passwords are created from a set of 62 chars [A-Za-z0-9]
2) Valid passwords are to be between 5 and 8 chars long
Then the size of the set of all valid passwords is: (in base 62)
       100000 +
      1000000 +
     10000000 +
    100000000
    ---------
    111100000  (base 62)
This figure is far too large to search exhaustively with current
technologies. Don't forget, however, that passwords
CAN be made up from even more characters than this; you can use <space>,
all the punctuation characters, and symbols (~<>|\#$%^&*) too. If you
use some or all of the 95 non-control characters in passwords, this
increases the search space for a cracker to cover even further.
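As a quick sanity check on the arithmetic above, in decimal:

```python
# Search space for passwords of length 5..8 over the 62-character
# alphabet [A-Za-z0-9]:
total_62 = sum(62**n for n in range(5, 9))
print(total_62)    # 221919436559520 -- about 2.2 * 10**14

# The same count over all 95 printable (non-control) ASCII characters:
total_95 = sum(95**n for n in range(5, 9))
print(round(total_95 / total_62))    # roughly 30 times larger
```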
However, it's still MUCH more efficient for a cracker to get a copy of
"Crack", break into ANY account on the system (you only need one), log
onto the machine, and spoof his way up to root privileges via operating
system holes.
Take comfort from these figures. If you can slam the door in the face
of potential crackers with a robust password file, you have sealed
most of the major avenues of attack immediately.
Q.19 Where can I get more information?
Books:
[Kochan & Wood]
Unix System Security
A little dated for modern matters, but still a very good book on the
basics of Unix security.
[Spafford & Garfinkel]
Practical Unix Security
This wonderful book is a worthy successor to the above, and covers a
wide variety of the topics which the Unix (and some non Unix) system
manager of the 90's will come across.
>From: Gene Spafford <spaf@cs.purdue.edu>
>Mention appendix E in "Practical Unix Security."
Okay: Appendix E contains an extensive bibliography with even more
pointers to security books than this FAQ contains.
[Stoll]
The Cuckoo's Egg
A real life 1980's thriller detailing the tracing of a cracker from
Berkeley across the USA and over the Atlantic to Germany. An excellent
view from all points: a good read, informative about security, funny,
and a good illustration of the cracker psyche. Contains an excellent
recipe for chocolate chip cookies.
A videotape of the "NOVA" (PBS's Science Program on TV) episode that
explained/reenacted this story is available from PBS Home Video. They
have a toll-free 800 number within North America.
I believe that this program was aired on the BBC's "HORIZON" program,
and thus will be available from BBC Enterprises, but I haven't checked
this out yet - AEM
[Raymond] (Ed.)
The New Hacker's Dictionary/Online Jargon File
A mish-mash of history and dictionary definitions which explains why it
is so wonderful to be a hacker, and why those crackers who aren't
hackers want to be called "hackers". The Jargon File version is
available online - check an Archie database for details. Latest
revision: 2.99.
[Gasser]
Building a Secure Computer System.
By Morrie Gasser, published by Van Nostrand Reinhold; explains what is
required to build a secure computer system.
[Rainbow Series] (Especially the "Orange Book")
>From: epstein@trwacs.fp.trw.com (Jeremy Epstein)
>The "Rainbow Series" consists of about 25 volumes. Some of the
>more interesting ones are:
>
> The "Orange Book", or Trusted Computer Systems Evaluation
> Criteria, which describes functional and assurance
> requirements for computer systems
>
> Trusted Database Interpretation, which talks both about
> trusted databases and building systems out of trusted
> components
>
> Trusted Network Interpretation, which (obviously) talks
> about networked systems
>
>A (possibly) complete list is:
> -- Department of Defense Trusted Computer System Evaluation Criteria
> (TCSEC), aka the "Orange Book"
> -- Computer Security Subsystem Interpretation of the TCSEC
> -- Trusted Data Base Management System Interpretation of the TCSEC
> -- Trusted Network Interpretation of the TCSEC
> -- Trusted Network Interpretation Environments Guideline -- Guidance
> for Applying the Trusted Network Interpretation
> -- Trusted Unix Working Group (TRUSIX) Rationale for Selecting
> Access Control List Features for the Unix System
> -- Trusted Product Evaluations -- A Guide for Vendors
> -- Computer Security Requirements -- Guidance for Applying the DoD
> TCSEC in Specific Environments
> -- Technical Rationale Behind CSC-STD-003-85: Computer Security
> Requirements
> -- Trusted Product Evaluation Questionnaire
> -- Rating Maintenance Phase -- Program Document
> -- Guidelines for Formal Verification Systems
> -- A Guide to Understanding Audit in Trusted Systems
> -- A Guide to Understanding Trusted Facility Management
> -- A Guide to Understanding Discretionary Access Control in Trusted
> Systems
> -- A Guide to Understanding Configuration Management in Trusted
>    Systems
> -- A Guide to Understanding Design Documentation in Trusted Systems
> -- A Guide to Understanding Trusted Distribution in Trusted Systems
> -- A Guide to Understanding Data Remanence in Automated Information
> Systems
> -- Department of Defense Password Management Guideline
> -- Glossary of Computer Security Terms
> -- Integrity in Automated Information Systems
>
>You can get your own copy (free) of any or all of the books by
>writing or calling:
>
> INFOSEC Awareness Office
National Computer Security Center
> 9800 Savage Road
> Fort George G. Meade, MD 20755-6000
> Tel +1 301 766-8729
>
>If you ask to be put on the mailing list, you'll get a copy of each new
>book as it comes out (typically a couple a year).
>From: kleine@fzi.de (Karl Kleine)
>I was told that this offer is only valid for US citizens ("We only send
>this stuff to a US postal address"). Non-US people have to PAY to get
>hold of these documents. They can be ordered from NTIS, the National
>Technical Information Service:
> NTIS,
> 5285 Port Royal Rd,
> Springfield VA 22151,
> USA
> order dept phone: +1-703-487-4650, fax +1-703-321-8547
>From: Ulf Kieber <kieber@de.tu-dresden.inf.freia>
>just today I got my set of the Rainbow Series.
>
>There are three new books:
> -- A Guide to Understanding Trusted Recovery in Trusted Systems
> -- A Guide to Understanding Identification and Authentication in Trusted
> Systems
> -- A Guide to Writing the Security Features User's Guide for Trusted Systems
>
>They also shipped
> -- Advisory Memorandum on Office Automation Security Guideline
>issued by NTISS. Most of the books (except three or four) can also be
>purchased from
>
> U.S. Government Printing Office
> Superintendent of Documents
> Washington, DC 20402 phone: (202) 783-3238
>
>>-- Integrity in Automated Information Systems
>THIS book was NOT shipped to me--I'm not sure if it is still in
>the distribution.
>From: epstein@trwacs.fp.trw.com (Jeremy Epstein)
>...
>The ITSEC (Information Technology Security Evaluation Criteria) is a
>harmonized document developed by the British, German, French, and
>Netherlands governments. It separates functional and assurance
>requirements, and has many other differences from the TCSEC.
>
>You can get your copy (again, free/gratis) by writing:
>
> Commission of the European Communities
> Directorate XIII/F
> SOG-IS Secretariat
> Rue de la Loi 200
> B-1049 BRUSSELS
> Belgium
Also note that NCSC periodically publish an "Evaluated Products List"
which is the definitive statement of which products have been approved
at what TCSEC level under which TCSEC interpretations. This is useful
for separating the output of marketdroids from the truth.
Papers:
[Morris & Thompson]
Password Security, A Case History
A wonderful paper, first published in CACM in 1979, which is now often
to be found in the Unix programmer documentation supplied with many systems.
[Curry]
Improving the Security of your Unix System.
A marvellous paper detailing the basic security considerations every
Unix systems manager should know. Available as "security-doc.tar.Z"
from FTP sites (check an Archie database for your nearest site.)
[Klein]
Foiling the Cracker: A Survey of, and Improvements to, Password Security.
A thorough and reasoned analysis of password cracking trends, and the
reasoning behind techniques of password cracking. Your nearest copy
should be easily found via Archie, searching for the keyword "Foiling".
[Cheswick]
The Design of a Secure Internet Gateway.
Great stuff. It's research.att.com:/dist/Secure_Internet_Gateway.ps
[Cheswick]
An Evening With Berferd: in which a Cracker is Lured, Endured and Studied.
Funny and very readable, somewhat in the style of [Stoll] but more
condensed. research.att.com:/dist/berferd.ps
[Bellovin89]
Security Problems in the TCP/IP Protocol Suite.
A description of security problems in many of the protocols widely used
in the Internet. Not all of the discussed protocols are official
Internet Protocols (i.e. blessed by the IAB), but all are widely used.
The paper originally appeared in ACM Computer Communications Review,
Vol 19, No 2, April 1989. research.att.com:/dist/ipext.ps.Z
[Bellovin91]
Limitations of the Kerberos Authentication System
A discussion of the limitations and weaknesses of the Kerberos
Authentication System. Specific problems and solutions are presented.
Very worthwhile reading. Available on research.att.com via anonymous
ftp, originally appeared in ACM Computer Communications Review but the
revised version (identical to the online version, I think) appeared in
the Winter 1991 USENIX Conference Proceedings.
[Muffett]
Crack documentation.
The information which accompanies Crack contains a whimsical explanation
of password cracking techniques and the optimisation thereof, as well as
an incredibly long and silly diatribe on how to not choose a crackable
password. A good read for anyone who needs convincing that password
cracking is _really easy_.
[Farmer]
COPS
Read the documentation provided with COPS. Lots of hints and
philosophy. The where, why and how behind the piece of security
software that started it all.
[CERT]
maillists/advisories/clippings
CERT maintains archives of useful bits of information that it gets from
USENET and other sources. Also archives of all the security
"advisories" that it has posted (ie: little messages warning people that
there is a hole in their operating system, and where to get a fix)
[OpenSystemsSecurity]
A notorious (but apparently quite good) document, which has been dogged
by being in a weird PostScript format.
>From: amesml@monu1.cc.monash.edu.au (Mark L. Ames)
>I've received many replies to my posting about Arto Karila's paper,
>including the news (that I and many others have missed) that a
>manageable postscript file and text file are available via anonymous ftp
>from ajk.tele.fi (131.177.5.20) in the directory PublicDocuments.
These are all available for FTP browsing from "cert.sei.cmu.edu".
[RFC-1244]
Site Security Handbook
RFC-1244 : JP Holbrook & JK Reynolds (Eds.) "The Site Security Handbook"
covering incident handling and prevention. July 1991; 101 pages
(Format: TXT=259129 bytes), also called "FYI 8"
[USENET]
comp.virus: for discussions of viruses and other nasties, with a PC bent.
comp.unix.admin: for general administration issues
comp.unix.<platform>: for the hardware/software that YOU use.
comp.protocols.tcp-ip: good for problems with NFS, etc.
Q.20 How silly can people get?
This section (which I hope to expand) is a forum for learning by
example; if people have a chance to read about real life (preferably
silly) security incidents, it will hopefully instill in readers some of
the zen of computer security without the pain of experiencing it.
If you have an experience that you wish to share, please send it to the
editors. It'll boost your karma no end.
---------------------------------------------------------------------------
aem@aber.ac.uk: The best story I have is of a student friend of mine
(call him Bob) who spent his industrial year at a major computer
manufacturing company. In his holidays, Bob would come back to college
and play AberMUD on my system.
Part of Bob's job at the company involved systems management, and the
company was very hot on security, so all the passwords were random
strings of letters, with no sensible order. It was imperative that the
passwords were secure (this involved writing the random passwords down
and locking them in big, heavy duty safes).
One day, on a whim, I fed the MUD persona file passwords into Crack as a
dictionary (the passwords were stored in plaintext) and then ran Crack on
our systems password file. A few student accounts came up, but nothing
special. I told the students concerned to change their passwords - that
was the end of it.
Being the lazy guy I am, I forgot to remove the passwords from the Crack
dictionary, and when I posted the next version to USENET, the words went
too. It went to the comp.sources.misc moderator, came back over USENET,
and eventually wound up at Bob's company. Round trip: ~10,000 miles.
Being a cool kinda student sysadmin dude, Bob ran the new version of
Crack when it arrived. When it immediately churned out the root
password on his machine, he damn near fainted...
The moral of this story is: never use the same password in two different
places, especially not on untrusted systems (like MUDs).
--
aem@aber.ac.uk aem@uk.ac.aber aem%aber@ukacrl.bitnet mcsun!uknet!aber!aem
- send (cryptographic) comp.sources.misc material to: aem@aber.ac.uk -
Q.1 What are alt.security and comp.security.misc for?
Comp.security.misc is a forum for the discussion of computer security,
especially those relating to Unix (and Unix like) operating systems.
Alt.security used to be the main newsgroup covering this topic, as well
as other issues such as car locks and alarm systems, but with the
creation of comp.security.misc, this may change.
This FAQ will concentrate wholly upon computer related security issues.
The discussions posted range from the likes of "What's such-and-such
system like?" and "What is the best software I can use to do so-and-so"
to "How shall we fix this particular bug?", although there is often a
low signal to noise ratio in the newsgroup (a problem which this FAQ
hopes to address).
The most common flamewars start when an apparent security novice posts a
message saying "Can someone explain how the such-and-such security hole
works?" and s/he is immediately leapt upon by a group of self appointed
people who crucify the person for asking such an "unsound" question in a
public place, and flame him/her for "obviously" being a cr/hacker.
Please remember that grilling someone over a high flame on the grounds
that they are "a possible cr/hacker" does nothing more than generate a
lot of bad feeling. If computer security issues are to be dealt with in
an effective manner, the campaigns must be brought (to a large extent)
into the open.
Implementing computer security can turn ordinary people into rampaging
paranoiacs, unable to act reasonably when faced with a new situation.
Such people take an adversarial attitude to the rest of the human race,
and if someone like this is in charge of a system, users will rapidly
find their machine becoming more restrictive and less friendly (fun?) to
use.
This can lead to embarrasing situations, eg: (in one university) banning
a head of department from the college mainframe for using a network
utility that he wasn't expected to. This apparently required a lot of
explaining to an unsympathetic committee to get sorted out.
A more sensible approach is to secure a system according to its needs,
and if its needs are great enough, isolate it completely. Please, don't
lose your sanity to the cause of computer security; it's not worth it.
Q.2 What's the difference between a hacker and a cracker?
Lets get this question out of the way right now:
On USENET, calling someone a "cracker" is an unambiguous statement that
some person persistently gets his/her kicks from breaking from into
other peoples computer systems, for a variety of reasons. S/He may pose
some weak justification for doing this, usually along the lines of
"because it's possible", but most probably does it for the "buzz" of
doing something which is illicit/illegal, and to gain status amongst a
peer group.
Particularly antisocial crackers have a vandalistic streak, and delete
filestores, crash machines, and trash running processes in pursuit of
their "kicks".
The term is also widely used to describe a person who breaks copy
protection software in microcomputer applications software in order to
keep or distribute free copies.
On USENET, calling someone a "hacker" is usually a statement that said
person holds a great deal of knowledge and expertise in the field of
computing, and is someone who is capable of exercising this expertise
with great finesse. For a more detailed definition, readers are
referred to the Jargon File [Raymond].
In the "real world", various media people have taken the word "hacker"
and coerced it into meaning the same as "cracker" - this usage
occasionally appears on USENET, with disastrous and confusing results.
Posters to the security newsgroups should note that they currently risk
a great deal of flamage if they use the word "hacker" in place of
"cracker" in their articles.
NB: nowhere in the above do I say that crackers cannot be true hackers.
It's just that I don't say that they are...
Q.3 What is "security through obscurity"
Security Through Obscurity (STO) is the belief that a system of any sort
can be secure so long as nobody outside of its implementation group is
allowed to find out anything about its internal mechanisms. Hiding
account passwords in binary files or scripts with the presumption that
"nobody will ever find it" is a prime case of STO.
STO is a philosophy favoured by many bureaucratic agencies (military,
governmental, and industrial), and it used to be a major method of
providing "pseudosecurity" in computing systems.
Its usefulness has declined in the computing world with the rise of open
systems, networking, greater understanding of programming techniques, as
well as the increase in computing power available to the average person.
The basis of STO has always been to run your system on a "need to know"
basis. If a person doesn't know how to do something which could impact
system security, then s/he isn't dangerous.
Admittedly, this is sound in theory, but it can tie you into trusting a
small group of people for as long as they live. If your employees get
an offer of better pay from somewhere else, the knowledge goes with
them, whether the knowledge is replaceable or not. Once the secret gets
out, that is the end of your security.
Nowadays there is also a greater need for the ordinary user to know
details of how your system works than ever before, and STO falls down a
as a result. Many users today have advanced knowledge of how their
operating system works, and because of their experience will be able to
guess at the bits of knowledge that they didn't "need to know". This
bypasses the whole basis of STO, and makes your security useless.
Hence there is now a need is to to create systems which attempt to be
algorithmically secure (Kerberos, Secure RPC), rather than just
philosophically secure. So long as your starting criteria can be met,
your system is LOGICALLY secure.
"Shadow Passwords" (below) are sometimes dismissed as STO, but this is
incorrect, since (strictly) STO depends on restricting access to an
algorithm or technique, whereas shadow passwords provide security by
restricting access to vital data.
Q.4 What makes a system insecure?
Switching it on. The adage usually quoted runs along these lines:
"The only system which is truly secure is one which is switched off
and unplugged, locked in a titanium lined safe, buried in a concrete
bunker, and is surrounded by nerve gas and very highly paid armed
guards. Even then, I wouldn't stake my life on it."
(the original version of this is attributed to Gene Spafford)
A system is only as secure as the people who can get at it. It can be
"totally" secure without any protection at all, so long as its continued
good operation is important to everyone who can get at it, assuming all
those people are responsible, and regular backups are made in case of
hardware problems. Many laboratory PC's quite merrily tick away the
hours like this.
The problems arise when a need (such as confidentiality) has to be
fulfilled. Once you start putting the locks on a system, it is fairly
likely that you will never stop.
Security holes manifest themselves in (broadly) four ways:
1) Physical Security Holes.
- Where the potential problem is caused by giving unauthorised persons
physical access to the machine, where this might allow them to perform
things that they shouldn't be able to do.
A good example of this would be a public workstation room where it would
be trivial for a user to reboot a machine into single-user mode and muck
around with the workstation filestore, if precautions are not taken.
Another example of this is the need to restrict access to confidential
backup tapes, which may (otherwise) be read by any user with access to
the tapes and a tape drive, whether they are meant to have permission or
not.
2) Software Security Holes
- Where the problem is caused by badly written items of "privledged"
software (daemons, cronjobs) which can be compromised into doing things
which they shouldn't oughta.
The most famous example of this is the "sendmail debug" hole (see
bibliography) which would enable a cracker to bootstrap a "root" shell.
This could be used to delete your filestore, create a new account, copy
your password file, anything.
(Contrary to popular opinion, crack attacks via sendmail were not just
restricted to the infamous "Internet Worm" - any cracker could do this
by using "telnet" to port 25 on the target machine. The story behind a
similar hole (this time in EMACS) is described in [Stoll].)
New holes like this appear all the time, and your best hopes are to:
a: try to structure your system so that as little software as possible
runs with root/daemon/bin privileges, and that which does is known to
be robust.
b: subscribe to a mailing list which can get details of problems
and/or fixes out to you as quickly as possible, and then ACT when you
receive information.
3) Incompatible Usage Security Holes
- Where, through lack of experience, or no fault of his/her own, the
System Manager assembles a combination of hardware and software which
when used as a system is seriously flawed from a security point of view.
It is the incompatibility of trying to do two unconnected but useful
things which creates the security hole.
Problems like this are a pain to find once a system is set up and
running, so it is better to build your system with them in mind. It's
never too late to have a rethink, though.
Some examples are detailed below; let's not go into them here, it would
only spoil the surprise.
4) Choosing a suitable security philosophy and maintaining it.
>From: Gene Spafford <spaf@cs.purdue.edu>
>The fourth kind of security problem is one of perception and
>understanding. Perfect software, protected hardware, and compatible
>components don't work unless you have selected an appropriate security
>policy and turned on the parts of your system that enforce it. Having
>the best password mechanism in the world is worthless if your users
>think that their login name backwards is a good password! Security is
>relative to a policy (or set of policies) and the operation of a system
>in conformance with that policy.
Q.5 What tools are there to aid security?
1) "COPS"
Managed by Dan Farmer, this is a long established suite of shell scripts
which forms an extensive security testing system; There is a rudimentary
password cracker, and routines to check the filestore for suspicious
changes in setuid programs, others to check permissions of essential
system and user files, and still more to see whether any system software
behaves in a way which could cause problems.
The software comes in two versions - one written in Perl and one
(largely equivalent) written in shell scripts. The latest version is
very up-to-date on Unix Security holes.
2) "Crack" (+ "UFC").
Written by Alec Muffett, this is a program written with one purpose in
mind: to break insecure passwords. It is probably the most efficent and
friendly password cracker that is publically available, with the ability
to let the user to specify precisely how to form the words to use as
guesses at users passwords.
It also has an inbuilt networking capability, allowing the load of
cracking to be spread over as many machines as are available on a
network, and it is supplied with an optimised version of the Unix crypt()
algorithm.
An even faster version of the crypt() algorithm, "UFC" by Michael Glad,
is freely available on the network, and the latest versions of UFC and
Crack are compatible and can be easily hooked together.
3) NPasswd (Clyde Hoover) & Passwd+ (Matt Bishop)
These programs are written to redress the balance in the password
cracking war. They provide replacements for the standard "passwd"
command, but prevent a user from selecting passwords which are easily
compromised by programs like Crack.
Several versions of these programs are available on the network, hacked
about to varying degrees in order to provide compatibility for System V
based systems, NIS/YP, shadow password schemes, etc. The usual term for
this type of program is a 'fascist' password program.
4) "Shadow" - a Shadow Password Suite
This program suite (by John F Haugh II) is a set of program and function
replacements (compatible with most Unixes) which implements shadow
passwords, ie: a system where the plaintext of the password file is
hidden from all users except root, hopefully stopping all password
cracking attempts at source. In combination with a fascist passwd
frontend, it should provide a good degree of password file robustness.
>From: jfh@rpp386.lonestar.org (John F. Haugh II)
>Shadow does much more than hide passwords. It also provides for
>terminal access control, user and group administration, and a few
>other things which I've forgotten. There are a dozen or more
>commands in the suite, plus a whole slew of library functions.
5) TCP Wrappers (Wietse Venema)
These are programs which provide a front-end filter to many of the
network services which Unix provides by default. If installed, they can
curb otherwise unrestricted access to potential dangers like incoming
FTP/TFTP, Telnet, etc, and can provide extra logging information, which
may be of use if it appears that someone is trying to break in.
6) SecureLib
>From: phil@pex.eecs.nwu.edu (William LeFebvre)
>You may want to add a mention of securelib, a security enhancer
>available for SunOS version 4.1 and higher.
>Securelib contains replacement routines for three kernel calls:
>accept(), recvfrom(), recvmsg(). These replacements are compatible with
>the originals, with the additional functionality that they check the
>Internet address of the machine initiating the connection to make sure
>that it is "allowed" to connect. A configuration file defines what
>hosts are allowed for a given program. Once these replacement routines
>are compiled, they can be used when building a new shared libc library.
>The resulting libc.so can then be put in a special place. Any program
>that should be protected can then be started with an alternate
>LD_LIBRARY_PATH.
7) SPI
>From: Gene Spafford <spaf@cs.purdue.edu>
>Sites connected with the Department of Energy and some military
>organizations may also have access to the SPI package. Interested (and
>qualified) users should contact the CIAC at LLNL for details.
>SPI is a screen-based administrator's tool that checks configuration
>options, includes a file-change (integrity) checker to monitor for
>backdoors and viruses, and various other security checks. Future
>versions will probably integrate COPS into the package. It is not
>available to the general public, but it is available to US Dept of
>Energy contractors and sites and to some US military sites. A version
>does or will exist for VMS, too. Further information on availability can
>be had from the folks at the DoE CIAC.
Q.6 Isn't it dangerous to give cracking tools to everyone?
That depends on your point of view. Some people have complained that
giving unrestricted public access to programs like COPS and Crack is
irresponsible because the "baddies" can get at them easily.
Alternatively, you may believe that the really bad "baddies" have had
programs like this for years, and that it's really a stupendously good
idea to give these programs to the good guys too, so that they may check
the integrity of their system before the baddies get to them.
So, who wins more from having these programs freely available? The good
guys or the bad? You decide, but remember that less honest tools than
COPS and Crack were already out there, and most of the good guys didn't
have anything to help.
Q.7 Where can I get these tools?
COPS:
V1.04, available for FTP from cert.sei.cmu.edu in pub/cops and
archive.cis.ohio-state.edu in pub/cops.
Crack/UFC:
Crack v4.1f and UFC Patchlevel 1. Available from any major USENET
archive (eg: ftp.uu.net) in volume 28 of comp.sources.misc.
NPasswd:
Currently suffering from being hacked about by many different people.
Version 2.0 is in the offing, but many versions exist in many
different configurations. Will chase this up with authors - AEM
Passwd+:
"alpha version, update 3" - beta version due soon. Available from
dartmouth.edu as pub/passwd+.tar.Z
Shadow:
This is available from the comp.sources.misc directory at any major
USENET archive (see entry for Crack)
TCP Wrappers:
Available for anonymous FTP:
cert.sei.cmu.edu: pub/network_tools/tcp_wrapper.shar
ftp.win.tue.nl: pub/security/log_tcp.shar.Z
Securelib:
The latest version of securelib is available via anonymous FTP from the
host "eecs.nwu.edu". It is stored in the file "pub/securelib.tar".
Q.8 Why and how do systems get broken into?
This is hard to answer definitively. Many systems which crackers break
into are only used as a means of entry into yet more systems; by hopping
between many machines before breaking into a new one, the cracker hopes
to confuse any possible pursuers and put them off the scent. There is
an advantage to be gained in breaking into as many different sites as
possible, in order to "launder" your connections.
Another reason may be psychological: some people love to play with
computers and stretch them to the limits of their capabilities.
Some crackers might think that it's "really neat" to hop over 6 Internet
machines, 2 gateways and an X.25 network just to knock on the doors of
some really famous company or institution (eg: NASA, CERN, AT+T, UCB).
Think of it as inter-network sightseeing.
This view is certainly appealing to some crackers, and certainly leads
to both the addiction and self-perpetuation of cracking.
As to the "How" of the question, this is again a very sketchy area. In
universities, it is extremely common for computer accounts to be passed
back and forth between undergraduates:
"Mary gives her account password to her boyfriend Bert at another
site, who has a friend Joe who "plays around on the networks". Joe
finds other crackable accounts at Mary's site, and passes them around
amongst his friends..." pretty soon, a whole society of crackers is
playing around on the machines that Mary uses.
This sort of thing happens all the time, and not just in universities.
One solution is in education. Do not let your users develop attitudes
like this one:
"It doesn't matter what password I use on _MY_ account,
after all, I only use it for laserprinting..."
- an Aberystwyth Law student, 1991
Teach them that use of the computer is a group responsibility. Make
sure that they understand that a chain is only as strong as its weakest
link.
Finally, when you're certain that they understand your problems as a
systems manager and that they totally sympathise with you, configure
your system in such a way that they can't possibly get it wrong.
Believe in user education, but don't trust to it alone.
Q.9 Who can I contact if I get broken into?
If you're connected to the Internet, you should certainly get in touch
with CERT, the Computer Emergency Response Team.
To quote the official blurb:
>From: Ed DeHart
> The Computer Emergency Response Team (CERT) was formed by the Defense
> Advanced Research Projects Agency (DARPA) in 1988 to serve as a focal
> point for the computer security concerns of Internet users. The
> Coordination Center for the CERT is located at the Software Engineering
> Institute, Carnegie Mellon University, Pittsburgh, PA.
> Internet E-mail: cert@cert.sei.cmu.edu
> Telephone: 412-268-7090 24-hour hotline:
> CERT/CC personnel answer 7:30a.m. to 6:00p.m. EST(GMT-5)/EDT(GMT-4),
> and are on call for emergencies during other hours.
...and also, the umbrella group "FIRST", which mediates between the
incident handling teams themselves...
>From: John Wack <wack@csrc.ncsl.nist.gov>
>[...] FIRST is actually a very viable and growing
>organization, of which CERT is a member. It's not actually true that,
>if you're connected to the Internet, you should call CERT only - that
>doesn't do justice to the many other response teams out there and in the
>process of forming.
>NIST is currently the FIRST secretariat; we maintain an anonymous ftp
>server with a directory of FIRST information (csrc.ncsl.nist.gov:
>~/pub/first). This directory contains a contact file that lists the
>current members and their constituencies and contact information
>(filename "first-contacts").
>While CERT is a great organization, other response teams who do handle
>incidents on their parts of the Internet merit some mention as well -
>perhaps mentioning the existence of this file would help to do that in a
>limited space.
The file mentioned is a comprehensive listing of contact points per
network for security incidents. It is too large to reproduce here; I
suggest that the reader obtain a copy by the means given.
Q.10 What is a firewall?
An (Internet) firewall is a machine which is attached (usually) between
your site and a wide area network. It provides controllable filtering
of network traffic, allowing restricted access to certain internet port
numbers (ie: services that your machine would otherwise provide to the
network as a whole) and blocks access to pretty well everything else.
Similar machines are available for other network types, too.
Firewalls are an effective "all-or-nothing" approach to dealing with
external access security, and they are becoming very popular, with the
rise in Internet connectivity.
For more information on this sort of topic, see the Gateway paper by
[Cheswick], below.
Q.11 Why shouldn't I use setuid shell scripts?
You shouldn't use them for a variety of reasons, mostly involving bugs
in the Unix kernel. Here are a few of the more well known problems,
some of which are fixed on more recent operating systems.
1) If the script begins "#!/bin/sh" and a link (symbolic or otherwise)
can be made to it with the name "-i", a setuid shell can be immediately
obtained because the script will be invoked: "#!/bin/sh -i", ie: an
interactive shell.
2) Many kernels suffer from a race condition which can allow you to
exchange the shellscript for another executable of your choice between
the times that the newly exec()ed process goes setuid, and when the
command interpreter gets started up. If you are persistent enough, in
theory you could get the kernel to run any program you want.
3) The IFS bug: the IFS shell variable contains a list of characters to
be treated like whitespace by a shell when parsing command names. By
changing the IFS variable to contain the "/" character, the command
"/bin/true" becomes "bin true".
All you need do is export the modified IFS variable, install a command
called "bin" in your path, and run a setuid script which calls
"/bin/true". Then "bin" will be executed whilst setuid.
If you really must write scripts to be setuid, either
a) Put a setuid wrapper in "C" around the script, being very careful
to reset IFS and PATH to something sensible before exec()ing the
script. If your system has runtime linked libraries, consider the
values of the LD_LIBRARY_PATH also.
b) Use a scripting language like Perl which has a safe setuid
facility, and is proactively rabid about security.
- but really, it's safest not to use setuid scripts at all.
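The environment scrubbing of approach (a) can be sketched as follows (Python for brevity; a real wrapper would be a small C program, and the variable list here is illustrative, not exhaustive):

```python
# Variables a hostile caller could use against a setuid script; an
# illustrative subset only.
DANGEROUS = {"IFS", "LD_LIBRARY_PATH", "LD_PRELOAD", "ENV", "BASH_ENV"}

def sanitized_env(env):
    """Drop dangerous variables and force known-good PATH and IFS."""
    clean = {k: v for k, v in env.items() if k not in DANGEROUS}
    clean["PATH"] = "/bin:/usr/bin"
    clean["IFS"] = " \t\n"
    return clean
```

A C wrapper would build such a table and hand it to execve() as the new environment, rather than letting the script inherit whatever the caller exported.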
Q.12 Why shouldn't I leave "root" permanently logged on the console?
Using a 'smart' terminal as console and leaving "/dev/console" world
writable whilst "root" is logged in is a potential hole. The terminal
may be vulnerable to remote control via escape sequences, and can be
used to 'type' things into the root shell. The terminal type can
usually be obtained via the "ps" command.
Various solutions to this can be devised, usually by giving the console
owner- and group-write access only, and then using the setgid mechanism
on any program which needs to output to the console (eg: "write").
Q.13 Why shouldn't I create Unix accounts with null passwords?
Creating an unpassworded account to serve any purpose is potentially
dangerous, not for any direct reason, but because it can give a cracker
a toehold.
For example, on many systems you will find an unpassworded user "sync",
which allows the sysman to sync the disks without being logged in. This
appears to be both safe and innocuous.
The problem with this arises if your system is one of the many which
doesn't do checks on a user before authorising them for (say) FTP. A
cracker might be able to connect to your machine via one of a variety of
FTP methods, pretending to be the user "sync" with no password, and then
copy your password file off for remote cracking.
Although there are mechanisms to prevent this sort of thing happening in
most modern versions of Unix, to be totally secure requires an in-depth
knowledge of every package on your system, and how it deals with the
verification of users. If you can't be sure, it's probably better not
to leave holes like this around.
Another hole that having null-password accounts opens up is the
possibility (on systems with runtime linked libraries) of spoofing
system software into running your programs as the "sync" user, by
changing the LD_LIBRARY_PATH variable to a library of your own devising,
and running "login -p" or "su" to turn into that user.
Q.14 What security holes are associated with X-windows (and other WMs)?
Lots, some which affect use of X only, and some which impact the
security of the entire host system.
I would prefer not to go into too much detail here, and would refer any
reader looking for detailed information to the FAQs in the relevant
newsgroups. (comp.windows.*)
One point I will make is that X is one of those packages which often
generates "Incompatible Usage" security problems, for instance the
ability for crackers to run xsessions on hosts under accounts with no
password (eg: sync), if it is improperly set up. Read the question
about unpassworded accounts in this FAQ.
Q.15 What security holes are associated with NFS?
Lots, mostly to do with who you export your disks to, and how. The
security of NFS relies heavily upon who is allowed to mount the files
that a server exports, and whether they are exported read only or not.
The exact format for specifying which hosts can mount an exported
directory varies between Unix implementations, but generally the
information is contained within the file "/etc/exports".
This file contains a list of directories and for each one, it has a
series of either specific "hosts" or "netgroups" which are allowed to
NFS mount that directory. This list is called the "access list".
The "hosts" are individual machines, whilst "netgroups" are combinations
of hosts and usernames specified in "/etc/netgroup". These are meant to
provide a method of fine-tuning access. Read the relevant manual page
for more information about netgroups.
The exports file also contains information about whether the directory
is to be exported as read-only, read-write, and whether super-user
access is to be allowed from clients which mount that directory.
The important point to remember is that if the access list for a
particular directory in /etc/exports contains:
1) <nothing>
Your directory can be mounted by anyone, anywhere.
2) <a specific hostname>
Your directory can be mounted by anyone permitted to run the mount
command at hostname. This might not be a trustworthy person; for
instance, if the machine is a PC running NFS, it could be anyone.
3) <a netgroup name>
If the netgroup:
a) is empty, anyone can mount your directory, from anywhere.
b) contains "(,,)", anyone can mount your directory, from anywhere.
c) contains the name of a netgroup which is empty or contains "(,,)",
anyone can mount your directory, from anywhere.
d) contains "(hostname,,)", anyone on the named host who is permitted
to mount files can mount your directory.
e) contains "(,username,)", the named user can mount your directory,
from anywhere.
4) <a word which is neither a hostname or a netgroup>
If you meant to export the directory to the host "athena" but actually
type "ahtena", the word "ahtena" is taken as a netgroup name, is found
to be an empty netgroup, and thus the directory can be mounted by
anyone, anywhere.
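The rules above can be sketched as a rough checker (a Python illustration with hypothetical data structures; real exports syntax varies between Unix implementations, so treat this as the logic only):

```python
# Classify an exports access list per the cases above.  known_hosts is a
# set of real host names; netgroups maps a netgroup name to its list of
# (host, user, domain) triples, with "" for an empty field.
def export_risk(access_list, known_hosts, netgroups):
    """Return 'world-mountable' for cases 1, 3a-c and 4, else 'restricted'."""
    if not access_list:
        return "world-mountable"              # case 1: empty access list
    for name in access_list:
        if name in known_hosts:
            continue                          # case 2: a specific host
        members = netgroups.get(name)
        if not members or ("", "", "") in members:
            return "world-mountable"          # cases 3a-c, and case 4 (typo)
    return "restricted"
```

Note how a simple typo (case 4) falls through to the same result as an empty netgroup: the unknown name has no members, so the world can mount the directory.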
So, if you aren't careful about what you put into /etc/exports and
/etc/netgroup you could find that a user with a PC could
a) mount your mainframe filestore as a network disk
b) edit your /etc/passwd or .rhosts or /etc/hosts.equiv ...
c) log into your mainframe as another user, possibly "root"
Disclaimer: The above information may not be true for all platforms
which provide an NFS serving capability, but is true for all of the ones
in my experience (AEM). It should be noted that the SAFE way to create
an "empty" netgroup entry is:
ngname (-,-,-)
This is a netgroup which matches no-one, on no host, in no NIS domain.
[ I am STILL working on PC NFS packages / ethics at the moment - AEM ]
Q.16 How can I generate safe passwords?
You can't. The key word here is GENERATE. Once an algorithm for
creating passwords is specified, based upon some systematic method, it
becomes merely a matter of analysing that algorithm in order to find
every password on your system.
Unless the algorithm is very subtle, it will probably suffer from a very
low period (ie: it will soon start to repeat itself) so that either:
a) a cracker can try out every possible output of the password
generator on every user of the system, or
b) the cracker can analyse the output of the password program,
determine the algorithm being used, and apply the algorithm to other
users to determine their passwords.
A beautiful example of this (where it was disastrously assumed that a
random number generator could generate an infinite number of random
passwords) is detailed in [Morris & Thompson].
The only way to get a reasonable amount of variety in your passwords
(I'm afraid) is to make them up. Work out some flexible method of your
own which is NOT based upon:
1) modifying any part of your name or name+initials
2) modifying a dictionary word
3) acronyms
4) any systematic, well-adhered-to algorithm whatsoever
For instance, NEVER use passwords like:
alec7 - it's based on the user's name (& it's too short anyway)
tteffum - based on the user's name again
gillian - girlfriend's name (in a dictionary)
naillig - ditto, backwards
PORSCHE911 - it's in a dictionary
12345678 - it's in a dictionary (& people can watch you type it easily)
qwertyui - ...ditto...
abcxyz - ...ditto...
0ooooooo - ...ditto...
Computer - just because it's capitalised doesn't make it safe
wombat6 - ditto for appending some random character
6wombat - ditto for prepending some random character
merde3 - even for french words...
mr.spock - it's in a sci-fi dictionary
zeolite - it's in a geological dictionary
ze0lite - corrupted version of a word in a geological dictionary
ze0l1te - ...ditto...
Z30L1T3 - ...ditto...
I hope that these examples emphasise that ANY password derived from ANY
dictionary word (or personal information), modified in ANY way,
constitutes a potentially guessable password.
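A few of the trivial derivations a cracker tries can be sketched like this (a Python illustration; a real tool such as Crack applies far more rules, and the word list here is a stand-in for a dictionary plus personal information):

```python
# Catch a word, its reversal, or a word with one character prepended or
# appended - three of the simplest derivation rules a cracker tries.
def guessable(password, words):
    """True if password is trivially derived from any word in words."""
    p = password.lower()
    for w in words:
        w = w.lower()
        if p in (w, w[::-1]):
            return True                       # word, or word backwards
        if len(p) == len(w) + 1 and (p.startswith(w) or p.endswith(w)):
            return True                       # wombat6, 6wombat, etc.
    return False
```

Run against the examples above, it flags "naillig", "wombat6" and "PORSCHE911" instantly, which is exactly the point: these transformations cost a cracker nothing.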
For more detailed information in the same vein, you should read the
APPENDIX files which accompany Crack [Muffett].
Q.17 Why are passwords so important?
Because they are the first line of defence against interactive attacks
on your system. It can be stated simply: if a cracker cannot interact
with your system(s), and he has no access to read or write the
information contained in the password file, then he has almost no
avenues of attack left open to break your system.
This is also why, if a cracker can at least read your password file (and
if you are on a vanilla modern Unix, you should assume this) it is so
important that he is not able to break any of the passwords contained
therein. If he can, then it is also fair to assume that he can (a) log
on to your system and can then (b) break into "root" via an operating
system hole.
Q.18 How many possible passwords are there?
Most people ask this at one time or another, worried that programs like
Crack will eventually grow in power until they can do a completely
exhaustive search of all possible passwords, to break into a specific
users' account - usually root.
If (to simplify the maths) we make the assumptions that:
1) Valid passwords are created from a set of 62 chars [A-Za-z0-9]
2) Valid passwords are to be between 5 and 8 chars long
Then the size of the set of all valid passwords is: (in base 62)
100000 +
1000000 +
10000000 +
100000000 =
---------
111100000 (base 62)
A figure which is far too large to usefully undertake an exhaustive
search with current technologies. Don't forget, however, that passwords
CAN be made up with even more characters than this; you can use <space>,
all the punctuation characters, and symbols (~<>|\#$%^&*) too. If you
can use some of all the 95 non-control characters in passwords, this
increases the search space for a cracker to cover even further.
However, it's still MUCH more efficient for a cracker to get a copy of
"Crack", break into ANY account on the system (you only need one), log
onto the machine, and spoof his way up to root privileges via operating
system holes.
Take comfort from these figures. If you can slam the door in the face
of potential crackers with a robust password file, you have sealed
most of the major avenues of attack immediately.
Q.19 Where can I get more information?
Books:
[Kochan & Wood]
Unix System Security
A little dated for modern matters, but still a very good book on the
basics of Unix security.
[Spafford & Garfinkel]
Practical Unix Security
This wonderful book is a worthy successor to the above, and covers a
wide variety of the topics which the Unix (and some non Unix) system
manager of the 90's will come across.
>From: Gene Spafford <spaf@cs.purdue.edu>
>Mention appendix E in "Practical Unix Security."
Okay: Appendix E contains an extensive bibliography with even more
pointers to security books than this FAQ contains.
[Stoll]
The Cuckoo's Egg
A real life 1980's thriller detailing the tracing of a cracker from
Berkeley across the USA and over the Atlantic to Germany. An excellent
view from all points: a good read, informative about security, funny,
and a good illustration of the cracker psyche. Contains an excellent
recipe for chocolate chip cookies.
A videotape of the "NOVA" (PBS's Science Program on TV) episode that
explained/reenacted this story is available from PBS Home Video. They
have a toll-free 800 number within North America.
I believe that this program was aired on the BBC's "HORIZON" program,
and thus will be available from BBC Enterprises, but I haven't checked
this out yet - AEM
[Raymond] (Ed.)
The New Hackers Dictionary/Online Jargon File
A mish-mash of history and dictionary definitions which explains why it
is so wonderful to be a hacker, and why those crackers who aren't
hackers want to be called "hackers". The Jargon File version is
available online - check an Archie database for details. Latest
revision: 2.99.
[Gasser]
Building a Secure Computer System.
By Morrie Gasser, and van Nostrand Reinhold; explains what is required
to build a secure computer system.
[Rainbow Series] (Especially the "Orange Book")
>From: epstein@trwacs.fp.trw.com (Jeremy Epstein)
>The "Rainbow Series" consists of about 25 volumes. Some of the
>more interesting ones are:
>
> The "Orange Book", or Trusted Computer Systems Evaluation
> Criteria, which describes functional and assurance
> requirements for computer systems
>
> Trusted Database Interpretation, which talks both about
> trusted databases and building systems out of trusted
> components
>
> Trusted Network Interpretation, which (obviously) talks
> about networked systems
>
>A (possibly) complete list is:
> -- Department of Defense Trusted Computer System Evaluation Criteria
> (TCSEC), aka the "Orange Book"
> -- Computer Security Subsystem Interpretation of the TCSEC
> -- Trusted Data Base Management System Interpretation of the TCSEC
> -- Trusted Network Interpretation of the TCSEC
> -- Trusted Network Interpretation Environments Guideline -- Guidance
> for Applying the Trusted Network Interpretation
> -- Trusted Unix Working Group (TRUSIX) Rationale for Selecting
> Access Control List Features for the Unix System
> -- Trusted Product Evaluations -- A Guide for Vendors
> -- Computer Security Requirements -- Guidance for Applying the DoD
> TCSEC in Specific Environments
> -- Technical Rationale Behind CSC-STD-003-85: Computer Security
> Requirements
> -- Trusted Product Evaluation Questionnaire
> -- Rating Maintenance Phase -- Program Document
> -- Guidelines for Formal Verification Systems
> -- A Guide to Understanding Audit in Trusted Systems
> -- A Guide to Understanding Trusted Facility Management
> -- A Guide to Understanding Discretionary Access Control in Trusted
> Systems
> -- A Guide to Understanding Configuration Management in Trusted
> Systems
> -- A Guide to Understanding Design Documentation in Trusted Systems
> -- A Guide to Understanding Trusted Distribution in Trusted Systems
> -- A Guide to Understanding Data Remanence in Automated Information
> Systems
> -- Department of Defense Password Management Guideline
> -- Glossary of Computer Security Terms
> -- Integrity in Automated Information Systems
>
>You can get your own copy (free) of any or all of the books by
>writing or calling:
>
> INFOSEC Awareness Office
> National Computer Security Center
> 9800 Savage Road
> Fort George G. Meade, MD 20755-6000
> Tel +1 301 766-8729
>
>If you ask to be put on the mailing list, you'll get a copy of each new
>book as it comes out (typically a couple a year).
>From: kleine@fzi.de (Karl Kleine)
>I was told that this offer is only valid for US citizens ("We only send
>this stuff to a US postal address"). Non-US people have to PAY to get
>hold of these documents. They can be ordered from NTIS, the National
>Technical Information Service:
> NTIS,
> 5285 Port Royal Rd,
> Springfield VA 22151,
> USA
> order dept phone: +1-703-487-4650, fax +1-703-321-8547
>From: Ulf Kieber <kieber@de.tu-dresden.inf.freia>
>just today I got my set of the Rainbow Series.
>
>There are three new books:
> -- A Guide to Understanding Trusted Recovery in Trusted Systems
> -- A Guide to Understanding Identification and Authentication in Trusted
> Systems
> -- A Guide to Writing the Security Features User's Guide for Trusted Systems
>
>They also shipped
> -- Advisory Memorandum on Office Automation Security Guideline
>issued by NTISS. Most of the books (except three or four) can also be
>purchased from
>
> U.S. Government Printing Office
> Superintendent of Documents
> Washington, DC 20402 phone: (202) 783-3238
>
>>-- Integrity in Automated Information Systems
>THIS book was NOT shipped to me--I'm not sure if it is still in
>the distribution.
>From: epstein@trwacs.fp.trw.com (Jeremy Epstein)
>...
>The ITSEC (Information Technology Security Evaluation Criteria) is a
>harmonized document developed by the British, German, French, and
>Netherlands governments. It separates functional and assurance
>requirements, and has many other differences from the TCSEC.
>
>You can get your copy (again, free/gratis) by writing:
>
> Commission of the European Communities
> Directorate XIII/F
> SOG-IS Secretariat
> Rue de la Loi 200
> B-1049 BRUSSELS
> Belgium
Also note that NCSC periodically publish an "Evaluated Products List"
which is the definitive statement of which products have been approved
at what TCSEC level under which TCSEC interpretations. This is useful
for separating the output of marketdroids from the truth.
Papers:
[Morris & Thompson]
Password Security, A Case History
A wonderful paper, first published in CACM in 1979, which is now often
to be found in the Unix Programmer Docs supplied with many systems.
[Curry]
Improving the Security of your Unix System.
A marvellous paper detailing the basic security considerations every
Unix systems manager should know. Available as "security-doc.tar.Z"
from FTP sites (check an Archie database for your nearest site.)
[Klein]
Foiling the Cracker: A Survey of, and Improvements to, Password Security.
A thorough and reasoned analysis of password cracking trends, and the
reasoning behind techniques of password cracking. Your nearest copy
should be easily found via Archie, searching for the keyword "Foiling".
[Cheswick]
The Design of a Secure Internet Gateway.
Great stuff. It's research.att.com:/dist/Secure_Internet_Gateway.ps
[Cheswick]
An Evening With Berferd: in which a Cracker is Lured, Endured and Studied.
Funny and very readable, somewhat in the style of [Stoll] but more
condensed. research.att.com:/dist/berferd.ps
[Bellovin89]
Security Problems in the TCP/IP Protocol Suite.
A description of security problems in many of the protocols widely used
in the Internet. Not all of the discussed protocols are official
Internet Protocols (i.e. blessed by the IAB), but all are widely used.
The paper originally appeared in ACM Computer Communications Review,
Vol 19, No 2, April 1989. research.att.com:/dist/ipext.ps.Z
[Bellovin91]
Limitations of the Kerberos Authentication System
A discussion of the limitations and weaknesses of the Kerberos
Authentication System. Specific problems and solutions are presented.
Very worthwhile reading. Available on research.att.com via anonymous
ftp, originally appeared in ACM Computer Communications Review but the
revised version (identical to the online version, I think) appeared in
the Winter 1991 USENIX Conference Proceedings.
[Muffett]
Crack documentation.
The information which accompanies Crack contains a whimsical explanation
of password cracking techniques and the optimisation thereof, as well as
an incredibly long and silly diatribe on how to not choose a crackable
password. A good read for anyone who needs convincing that password
cracking is _really easy_.
[Farmer]
COPS
Read the documentation provided with COPS. Lots of hints and
philosophy. The where, why and how behind the piece of security
software that started it all.
[CERT]
maillists/advisories/clippings
CERT maintains archives of useful bits of information that it gets from
USENET and other sources. Also archives of all the security
"advisories" that it has posted (ie: little messages warning people that
there is a hole in their operating system, and where to get a fix)
[OpenSystemsSecurity]
A notorious (but apparently quite good) document, which has been dogged
by being in a weird postscript format.
>From: amesml@monu1.cc.monash.edu.au (Mark L. Ames)
>I've received many replies to my posting about Arlo Karila's paper,
>including the news (that I and many others have missed) that a
>manageable postscript file and text file are available via anonymous ftp
>from ajk.tele.fi (131.177.5.20) in the directory PublicDocuments.
These are all available for FTP browsing from "cert.sei.cmu.edu".
[RFC-1244]
Site Security Handbook
RFC-1244 : JP Holbrook & JK Reynolds (Eds.) "The Site Security Handbook"
covering incident handling and prevention. July 1991; 101 pages
(Format: TXT=259129 bytes), also called "FYI 8"
[USENET]
comp.virus: for discussions of virii and other nasties, with a PC bent.
comp.unix.admin: for general administration issues
comp.unix.<platform>: for the hardware/software that YOU use.
comp.protocols.tcp-ip: good for problems with NFS, etc.
Q.20 How silly can people get?
This section (which I hope to expand) is a forum for learning by
example; if people have a chance to read about real life (preferably
silly) security incidents, it will hopefully instill in readers some of
the zen of computer security without the pain of experiencing it.
If you have an experience that you wish to share, please send it to the
editors. It'll boost your karma no end.
---------------------------------------------------------------------------
aem@aber.ac.uk: The best story I have is of a student friend of mine
(call him Bob) who spent his industrial year at a major computer
manufacturing company. In his holidays, Bob would come back to college
and play AberMUD on my system.
Part of Bob's job at the company involved systems management, and the
company was very hot on security, so all the passwords were random
strings of letters, with no sensible order. It was imperative that the
passwords were secure (this involved writing the random passwords down
and locking them in big, heavy duty safes).
One day, on a whim, I fed the MUD persona file passwords into Crack as a
dictionary (the passwords were stored plaintext) and then ran Crack on
our systems password file. A few student accounts came up, but nothing
special. I told the students concerned to change their passwords - that
was the end of it.
Being the lazy guy I am, I forgot to remove the passwords from the Crack
dictionary, and when I posted the next version to USENET, the words went
too. It went to the comp.sources.misc moderator, came back over USENET,
and eventually wound up at Bob's company. Round trip: ~10,000 miles.
Being a cool kinda student sysadmin dude, Bob ran the new version of
Crack when it arrived. When it immediately churned out the root
password on his machine, he damn near fainted...
The moral of this story is: never use the same password in two different
places, and especially on untrusted systems (like MUDs).
--
aem@aber.ac.uk aem@uk.ac.aber aem%aber@ukacrl.bitnet mcsun!uknet!aber!aem
- send (cryptographic) comp.sources.misc material to: aem@aber.ac.uk -