
User Resource Limits

What are user resource limits?

User resource limits dictate the amount of system resources that can be used during a particular session. The resources that can be controlled are:

  • maximum size of core files
  • maximum size of a process's data segment
  • maximum size of files created
  • maximum size that may be locked into memory
  • maximum size of resident memory
  • maximum number of file descriptors open at one time
  • maximum size of the stack
  • maximum amount of cpu time used
  • maximum number of processes allowed
  • maximum size of virtual memory available

It is important to note that these settings are per-session: they are in effect only while the user is logged in and apply to any processes started during that period. They are not global, cumulative settings. For example, if you set the maximum number of processes to 11, the user may only have 11 processes running per session; they are not limited to 11 total processes on the machine, as they may initiate another session. With the exception of the maximum number of processes, each setting is a per-process limit within the session.

There are two types of limits that can be set for each property listed above: a hard limit and a soft limit. A hard limit cannot be raised by the user once it is set. A soft limit can be changed by the user but cannot exceed the hard limit.

Why are user resource limits important?

User resource limits are important to the security and stability of a machine. If the machine that you wish to secure allows untrusted users to run programs, then there is risk of abuse. Abuse is not limited to users that have telnet shell access. If you allow users to run CGI scripts on your web server, these are also untrusted programs, and your system can be abused just as easily through them. The potential exists for a malicious user to run a program that eats up resources on the machine and makes it unstable or completely unusable for all the other users. This type of Denial of Service (DoS) attack is preventable with appropriate resource limits. It is also possible that a well-intentioned user makes a simple programming error that causes their program to use unnecessary amounts of system resources. Both of these situations can be prevented by the proper implementation of resource limits. This is necessary for the stability of any Linux machine.

Examples of Attacks

Before we look at how to configure user resource limits, it is important to look at two sample types of attacks that use up resources. These are simple example C programs that illustrate why resource limits are so important. They are provided here for educational purposes only; do not run them on a machine unless you have explicit permission to do so. These are only two examples off the top of my head; there are many more types of resource-usage attacks. You can fend off the damage from all of them with proper resource limitations.

Fork Bombs

The following C program, as simple as it is, has the potential to bring a machine to its knees without proper user resource limits.

#include <unistd.h>

int main()
{
	while (1)
		fork();
	return 0;
}

This program runs an infinite loop calling the fork() function. Fork creates a child process that is virtually identical to the parent. In other words, it creates another running instance of the program. As you can see, left unrestrained this would create an incredible number of processes, and the system would eventually lock up and become unstable. If you are fortunate enough to catch someone doing this before it gets too bad, you might be able to kill some of the processes, but that is not a viable solution to the problem.

Here is where user resource limits come in. It is possible to limit the number of processes that a user can launch in a single session. In other words, the user who has telnetted in and runs this program will be limited in the number of child processes that fork() can create. While their program will still raise the system's load, a proper limit on processes will stop it from going out of control.

Memory Usage Attack

It is also possible that a malicious user may attack your system by using up all the available RAM. The same thing can happen with a rogue process that does not properly manage its memory. The following is an example of a program whose only purpose is to allocate as much RAM as possible and never return it for use:

#include <stdlib.h>

int main()
{
        char *p;

        while (1)
                p = (char *)malloc(sizeof(char) * 4096);
	return 0;
}

Without any resource limits, this program will continue to allocate RAM until there is no more RAM or swap space available. This may take a while depending on the size of RAM and swap available, but it will eventually render the system useless and require a reboot.

Setting user resource limits

There are two main methods of implementing user resource limits on a Linux machine. One uses the properties of the login shell to set the limits; the other uses PAM (Pluggable Authentication Modules). Each has its advantages, but in general they accomplish the same goal: to limit the abuse that can occur on a machine.

Setting limits via the shell

Most login shells provide a method of setting user resource limits. A login shell is a program, such as bash, that provides an interface to the user when they log into the machine via telnet. The major advantage of this method of setting resource limits is that all Linux machines have a login shell such as bash. The primary disadvantage is that the limits will only apply to users that have shell access to the server, and only to programs run from the shell.

For our purposes, we will focus on the bash login shell. These same principles can be applied to tcsh and other shells. Consult the documentation for your respective shell to determine how to set resource limits.

Bash has a built-in command called ulimit that enables the user to set resource limits. Buried deep in the bash manpage, it is documented as follows:

ulimit [-SHacdflmnpstuv [limit]]

Provides control over the resources  available  to  the
shell  and  to processes started by it, on systems that
allow such control.  The value of limit can be a number
in  the  unit  specified for the resource, or the value
unlimited.  The -H and -S options specify that the hard
or  soft  limit  is set for the given resource.  A hard
limit cannot be increased once it is set; a soft  limit
may be increased up to the value of the hard limit.  If
neither -H nor -S is specified, both the soft and  hard
limits are set.  If limit is omitted, the current value
of the soft limit of the resource  is  printed,  unless
the -H option is given.  When more than one resource is
specified, the limit name and unit are  printed  before
the value.  Other options are interpreted as follows:

-a   All current limits are reported
-c   The maximum size of core files created
-d   The maximum size of a process's data segment
-f   The maximum size of files created by the shell
-l   The maximum size that may be locked into memory
-m   The maximum resident set size
-n   The maximum number of open file descriptors  (most
     systems do not allow this value to be set)
-p   The pipe size in 512-byte blocks (this may not  be
     set)
-s   The maximum stack size
-t   The maximum amount of cpu time in seconds
-u   The maximum number of  processes  available  to  a
     single user
-v   The maximum amount of virtual memory available  to
     the shell

If limit is given, it is the new value of the specified
resource (the -a option is display only).  If no option
is given, then -f is assumed.  Values are in  1024-byte
increments,  except  for  -t,  which is in seconds, -p,
which is in units of 512-byte blocks, and  -n  and  -u,
which  are  unscaled  values.   The  return status is 0
unless an invalid option is encountered, a  non-numeric
argument  other than unlimited is supplied as limit, or
an error occurs while setting a new limit.

As you can see, it is fairly straightforward to use the ulimit command. However, the question remains how we enforce these commands every time a user logs in to the shell. Fortunately, the contents of /etc/profile are executed every time bash is launched as a login shell. It is therefore possible to place a series of ulimit commands there to set hard limits on various resources.

Setting limits via PAM modules

If you have a system that uses PAM (Pluggable Authentication Modules) for authentication, you have another option for user resource limits. PAM comes with a well-documented resource limits module, pam_limits.

PAM authentication for the various applications is controlled by the files located in /etc/pam.d. To set limits for users that log in via telnet, first edit /etc/pam.d/login and add the following line at the end of the series of lines that begin with session:

session  required       /lib/security/pam_limits.so

Depending on your distribution, this line may already be there; if so, do not add it again.

The limits are then managed by editing the /etc/security/limits.conf file. See the documentation for the format of this file.

The major advantage of using PAM for resource limits is that different limits can easily be assigned to individual users, or to all users that are members of a group. PAM also allows you to limit the priority (see man nice for more information) that a user's processes will be assigned, and to cap the maximum number of logins (to prevent the user from logging in many times in an attempt to avoid the process limitations).

Real-World Usage Examples

So far we have covered the theory behind user limits and how they can be implemented. What documentation generally lacks are examples from real-world scenarios. Hopefully the following examples will prove to be useful.

Example using limits set by the shell

We have a machine that many untrusted users are allowed access to. The resources of the system need to be preserved and we choose the method of using user limits imposed by the shell.

We want to limit all users except the user root. We only want to allow a maximum of 40 open files per process, a maximum of 16 processes, a maximum resident set size of 16384 Kbytes (16MB), a maximum data segment of 16384 Kbytes (16MB), a maximum stack size of 8192 Kbytes (8MB), and a maximum core file size of 16384 Kbytes (16MB).

To accomplish this, we can place the following in the /etc/profile file.

if [ "$USER" != "root" ]; then
        ulimit -n 40 -u 16 -m 16384 -d 16384 -s 8192 -c 16384
fi

Now when any user except root logs in, the resource limits described above will be set for their session. The hard and soft limits are set to the same number for each of the types that appear in the preceding ulimit command. These limits remain in effect no matter what programs they run in that session. You can verify that the limits are set by typing ulimit -a in the shell.

The actual values that you choose to limit will vary. The above limits are being used successfully on a machine that has many untrusted users that are allowed to run programs. It may be more appropriate to choose lower resource limits (especially those regarding memory usage) depending on the hardware you have. You can always raise the limits by editing the /etc/profile file if you find that the limits are too restrictive or not restrictive enough.

Example using limits set by PAM

As we said before, PAM enables us to exercise a greater deal of flexibility over whom the resource limits apply to. In this example, we have users classified into three groups: trusted, limtrst, and notrust. Users in the trusted group are generally expected to be trusted users that don't run programs that could use more resources than needed, so the resource limits on those users do not need to be very strict. Users in the limtrst group are generally trusted but could be running programs that might accidentally use too many resources. Users in the notrust group are the least trusted and will have very tight restrictions on what resources they can use. We expect that if there were to be a resource attack, it would come from the users in the notrust group.

First, we have edited the /etc/pam.d/login and /etc/pam.d/sshd files and added the limits module as described above. This makes sure that both telnet and ssh connections are resource limited. We may also wish to do the same for /etc/pam.d/ftpd to limit ftpd resource usage as well.

Now we must decide what resource limits to impose on each group. The following numbers are somewhat arbitrary; the numbers you should use depend on the amount of memory you have. I have found it to be largely a matter of trial and error to determine which limits are too restrictive or need to be more restrictive.

  • Group trusted
    • maximum size of core files: 10MB
    • maximum size of a process's data segment: no limit
    • maximum size of files created: no limit
    • maximum size that may be locked into memory: 50MB
    • maximum size of resident memory: 50MB
    • maximum number of file descriptors open at one time: 1024
    • maximum size of the stack: 50MB
    • maximum amount of cpu time used: no limit
    • maximum number of processes allowed: 35
    • maximum size of virtual memory available: 100MB
  • Group limtrst
    • maximum size of core files: 10MB
    • maximum size of a process's data segment: 30MB
    • maximum size of files created: no limit
    • maximum size that may be locked into memory: 30MB
    • maximum size of resident memory: 30MB
    • maximum number of file descriptors open at one time: 512
    • maximum size of the stack: 30MB
    • maximum amount of cpu time used: 600 seconds (10 minutes)
    • maximum number of processes allowed: 25
    • maximum size of virtual memory available: 50MB
  • Group notrust
    • maximum size of core files: 5MB
    • maximum size of a process's data segment: 15MB
    • maximum size of files created: 50MB
    • maximum size that may be locked into memory: 15MB
    • maximum size of resident memory: 15MB
    • maximum number of file descriptors open at one time: 64
    • maximum size of the stack: 15MB
    • maximum amount of cpu time used: 120 seconds (2 minutes)
    • maximum number of processes allowed: 15
    • maximum size of virtual memory available: 25MB

To apply the preceding resource limits, the following would go into the /etc/security/limits.conf file (note that sizes in this file are expressed in kilobytes and cpu time in minutes):

# limits for group "trusted"
@trusted	hard	core	10240
@trusted	hard	memlock 51200
@trusted	hard	rss	51200
@trusted	hard	nofile 	1024
@trusted	hard 	stack	51200
@trusted	hard	nproc	35
@trusted	hard	as	102400

# limits for group "limtrst"
@limtrst	hard	core	10240
@limtrst	hard	data 	30720
@limtrst	hard	memlock 30720
@limtrst	hard	rss	30720
@limtrst	hard	nofile 	512
@limtrst	hard 	stack	30720
@limtrst	hard 	cpu	10
@limtrst	hard	nproc	25
@limtrst	hard	as	51200

# limits for group "notrust"
@notrust	hard	core	5120
@notrust	hard	data 	15360
@notrust	hard	fsize	51200
@notrust	hard	memlock 15360
@notrust	hard	rss	15360
@notrust	hard	nofile 	64
@notrust	hard 	stack	15360
@notrust	hard 	cpu	2
@notrust	hard	nproc	15
@notrust	hard	as	25600

Now we can assign each of our users to one of those three groups, depending on how far we trust what they may run on the machine.

Limitations of current implementation

There are some limitations with the current implementation of user resource limits. The largest is that you can only apply resource limits per session. There is no way at the moment to place a quota on the resources a certain user may use globally, across all of their sessions on the system.

At the moment, there is also no way to limit what is called from crontab (and possibly the same problem exists for at as well). Crontab enables a user to launch a program at a specific time. There is no way to apply resource limits to these launched programs in crontab's present form.

CGI scripts also pose a problem. I mentioned before that even if you disallow shell access but still allow users to run CGI scripts, there is the same risk that a user could use too many system resources. The best way to limit this is to run all CGI scripts through a program called cgiwrap. You should specifically compile cgiwrap with the --with-rlimit- settings to impose resource limits on all CGI scripts. There does not appear to be a way to impose different limits on different users' CGI scripts, however. The configuration of cgiwrap is beyond the scope of this document, but it is highly recommended that you look into using it.


It is imperative that you employ some set of resource limits on any Linux machines that you administer. Even if it is just a desktop machine and not a server, user resource limits can go a long way toward avoiding crashes caused by runaway processes.

I have made every effort to explain this subject to the best of my knowledge. However, there may be errors in the document; if you find one, please let me know. This information is supplied with NO WARRANTY whatsoever, and any damage caused by the preceding information is the responsibility of the user and not myself.

The contents of this document are licensed under the GNU Free Documentation License Version 1.1.

