Of course, the more complex the database, the more parameters you can change. RDBMSs like Oracle, for example, have dozens of different parameters that can alter the memory usage of the database to better fit your application's needs. This is reasonable, and I've seen spectacular improvements on specific systems just because a single memory setting was changed.
As always, the usual rules of tuning apply here: change only one thing at a time, and change only when you need to. And like any other tuning effort, the process should be iterative: make a change, then test it.
Different operating systems and databases have different means of checking the results of your changes. The most expensive ones, like Oracle and DB2, even have dedicated third-party packages that help you monitor resource usage in detail: CPU, disk I/O, cache and memory. The less sophisticated ones may lack those capabilities, but the operating system usually already has enough facilities (the /proc filesystem, Windows Performance Monitor) to give you at least a good overview of which resources are being used and how much.
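On Linux, for instance, you can get that overview without any extra tooling. Here is a minimal sketch that parses /proc/meminfo to summarize RAM and swap usage; the field names used are standard, but treat the exact set of available fields as kernel-dependent.

```python
# Minimal sketch: summarize physical RAM and swap usage on Linux by
# parsing /proc/meminfo. Values in that file are reported in kilobytes.

def read_meminfo(path="/proc/meminfo"):
    """Return /proc/meminfo as a dict of {field: kilobytes}."""
    info = {}
    with open(path) as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key.strip()] = int(value.strip().split()[0])
    return info

if __name__ == "__main__":
    m = read_meminfo()
    total, free = m["MemTotal"], m["MemFree"]
    swap_total, swap_free = m["SwapTotal"], m["SwapFree"]
    print(f"RAM : {(total - free) / 1024:.0f} MiB used of {total / 1024:.0f} MiB")
    print(f"Swap: {(swap_total - swap_free) / 1024:.0f} MiB used of {swap_total / 1024:.0f} MiB")
```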
It therefore follows that each change is usually followed by a number of tests that try to reproduce the most stressful scenarios and evaluate the performance of each. Let me say that this is sometimes a nightmare. I'm happy with the scenario "I have this two-second query and I need to execute it in 0.02 seconds", because that makes for quick and short tests, and those allow a much more rapid change/test cycle. But sometimes, when you're dealing with processes that take eight hours to execute, it's a pain in the neck to have to wait from one day to the next just to know the effects of a change.
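For the quick kind of test, even a few lines of scripting help keep the change/test cycle honest. The sketch below times the same workload several times and reports the median, so a single cold-cache run doesn't mislead you; the workload itself is just a placeholder to swap for your real query.

```python
# Minimal sketch of a change/test cycle: run the same workload several
# times and report the median elapsed time, so before/after comparisons
# are not skewed by one cold-cache run. The workload is a placeholder.
import statistics
import time


def time_workload(run, repetitions=5):
    """Run the workload several times and return the median elapsed seconds."""
    timings = []
    for _ in range(repetitions):
        start = time.perf_counter()
        run()
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)


def placeholder_query():
    # Stand-in workload; replace with a call that executes your real query.
    return sum(i * i for i in range(1_000_000))


if __name__ == "__main__":
    print(f"median runtime: {time_workload(placeholder_query):.3f} s")
```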
One of the most closely monitored parameters is always, or should be, swap file usage. But I've read comments implying that if your system is not using the swap file, it is not correctly set up. Let me dispel that myth once and for all.
The origins of the swap file
In the beginning there was memory, just one kind of it. The primitive computers of fifty years ago did not distinguish between RAM and disk memory. All memory was just memory, and some rotating drum or other device provided it. Then the transistor industry began the craze, and the distinction between fast memory (RAM) and disk memory was born. Essentially, the CPU can use a limited address space of memory (though "limited" here means something like 2GB) that can be accessed in a matter of nanoseconds. Were this the only memory available, most enterprise databases of today would not exist, as they easily exceed that size without a blink. That's why there's a second type of memory, usually called "mass storage" or just the disk, that has much, much greater capacity. However, as always, there's no free lunch, and this memory is much, much slower to access. And much slower means 100 or 1,000 times slower.
That's why the operating system always tries to load into RAM the program instructions that have to be executed, as a program's run time would increase greatly if its instructions had to be fetched from the slow memory instead of the fast one. Other data, such as the application's own data, is usually stored on disk and recalled on demand. Of course, when that happens the program's execution slows down enormously, as it has to wait for the disk to retrieve the requested piece of data, but not as much as if the instructions themselves were retrieved from the disk one by one.
This balance of disk memory and RAM has served the computing landscape well over the last thirty years or so. As applications became more complex, they required more and more memory for their instructions to be stored. As applications handled progressively bigger amounts of data, they required the auxiliary, slower memory to be bigger. Technology kept providing updates in the form of faster CPUs, faster RAM and faster disks. RAM has remained to this day a comparatively scarce resource, with disk drives reaching terabytes while RAM merely increases in gigabyte steps. Oh, and it is way more expensive than disk space. Thus, RAM was a resource that was carefully administered and assigned by system owners, because a program that could not allocate the amount of RAM it needed would usually stop dead cold, frustrating the whole run.
However, as computers were applied to more problem areas, it became clear that it was just not possible to anticipate how much RAM you were going to need. Perhaps in some situations, where the number of applications and their usage patterns were precisely known, it was possible to specify how much RAM you would need at each point and thus have enough of it in the machine.
With the advent of the workstation and the multiuser machine this became impossible. Since everybody could use the machine at any time with any mix of applications, it was no longer possible to delicately administer the RAM so that every application could run smoothly and not stop for lack of memory.
Operating system and CPU designers realized that it was not necessary for a machine to actually have all the memory demanded by the applications being executed. It was enough to appear to have it, and to provide on demand the regions of memory actually in use at any point. This would also remove the need for manual intervention and administration of machine resources. The system could just use the fast RAM until it was exhausted, and at that point start using the slow disk-based memory as applications demanded more. Of course, this came at the price of performance, but the reasoning was that a slow-running application was better than no application running at all. Thus, virtual memory was born.
From those basic concepts, the technology has of course evolved to a point where virtual memory has become a mix of art and science, and every operating system in popular use contains a fairly sophisticated implementation of it. Jargon like "committed", "dirty", "shared", "working set" and other words abounds in the literature, and I confess to knowing only their basic meaning, enough to get the work done. But it's a fantastic facility, because it allows you to use your machine as if it had no limits on its physical memory and still get reasonable performance out of it. It's a fascinating dance to watch how the machine allocates and releases virtual memory as you open and close applications or even perform certain tasks with them.
Ok, I hear you ask, what is the "swap file" then? Generically, the place on disk where the operating system puts the portions of memory that it is presenting to applications as if they were actual RAM is called the "swap file". In some operating systems this is not actually a file but a dedicated disk partition, or even a different physical disk.
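On Linux you can see which form your system uses by looking at /proc/swaps, which lists every active swap area along with its type, size and current usage. A minimal sketch:

```python
# Minimal sketch, Linux-specific: /proc/swaps lists every active swap area,
# whether it is a file or a dedicated partition, together with its size and
# how much of it is currently in use (both reported in kilobytes).

with open("/proc/swaps") as f:
    header, *rows = f.read().splitlines()

for row in rows:
    name, kind, size_kb, used_kb, _priority = row.split()
    print(f"{name}: {kind}, "
          f"{int(used_kb) / 1024:.0f} MiB used of {int(size_kb) / 1024:.0f} MiB")
```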
Why swapping is bad for your server
Of course, this virtual memory thing is all well and good, but it adds an element of uncertainty to the run time of a process, something that is sometimes undesirable. In fact, real-time or near-real-time systems usually forbid virtual memory altogether in order to maintain their commitments on execution time. And virtual memory only works up to a point: when your machine reaches the stage where too many processes are competing for the scarce RAM, it spends more time bringing memory pages in from disk than actually doing any useful application work. This is what is called thrashing, and it is bad, very bad for performance.
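One way to spot this on Linux, as a rough sketch, is to watch how fast the swap-in and swap-out counters in /proc/vmstat are moving; sustained non-zero rates while the system is under load are the classic symptom.

```python
# Minimal sketch, Linux-specific: sample the pswpin/pswpout counters in
# /proc/vmstat twice and report pages swapped in/out per second.
import time


def swap_counters():
    """Return the cumulative (pages swapped in, pages swapped out) counters."""
    counters = {}
    with open("/proc/vmstat") as f:
        for line in f:
            key, value = line.split()
            counters[key] = int(value)
    return counters["pswpin"], counters["pswpout"]


if __name__ == "__main__":
    interval = 5  # seconds between samples
    in1, out1 = swap_counters()
    time.sleep(interval)
    in2, out2 = swap_counters()
    print(f"pages swapped in : {(in2 - in1) / interval:.1f}/s")
    print(f"pages swapped out: {(out2 - out1) / interval:.1f}/s")
```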
Workstations and servers - two different worlds
All of the above applies very well to your workstation or PC, as you usually use it without any predefined pattern or restrictions. You start your word processor now, need some numbers from your spreadsheet, then decide to put it all together in some slides. You launch programs and close them without really being aware of how much RAM you're taking up. And you're willing to put up with a small performance loss as your system starts swapping (which is the word used for the point where your machine runs out of physical RAM and starts using the disk as a slow replacement), since the convenience surely beats having to organize and manage the execution order of your applications yourself. As long as the illusion works, you're living in a fantastic world of almost unlimited memory.
The life of a server is very different. There are some characteristics that make a machine what we call a "server". For one thing, it does not usually sit on your desk, but lives in a conditioned room with redundant power, protected from excessive heat or cold, its access restricted by a locked door for which only a selected few have the key. Its components are usually rated higher in reliability than their desktop workstation counterparts, even if they probably don't offer cutting-edge performance.
All that is for a good reason, because a server usually performs some specific function on which the work of many people depends. There are file servers, application servers, database servers, authentication servers, proxy servers, directory servers, name servers, and so on. Sometimes the same machine performs more than one role, especially where there are no resource usage conflicts between the services. But all of them are usually performing some essential function, and either some other system or a business process depends on their availability.
A server has to be available and usually has to deliver reasonable response times. A server runs a fixed set of processes or services, and usually does not need to deal with unexpected situations like someone sending you a 120MB PowerPoint presentation that you need to print in colour 12 times.
The corollary is that your server's memory usage should remain more or less constant. Except in situations where the workload exceeds the planned capacity, your server should be happily working and its resources (CPU, memory, disk) should be used fully. The essential, unforgettable word here is "fully". Fully means completely, but it does not mean "in excess".
And "in excess" when talking about memory means swapping.
Memory usage is the key
Memory is the fastest resource available to the CPU. So don't waste it. Check the documentation that comes with your database. The most mature ones have configuration parameters that allow you to tune the database's memory usage precisely, and be aware that some of them (MySQL, for example) have different parameters for different storage engines. Your memory usage with MyISAM tables is radically different from your memory usage with InnoDB tables (by the way, try not to mix the two in the same instance). As always with database tuning: measure, tweak a single parameter, measure again, and continue until you reach a good-enough state.
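As an illustration of where to look, the sketch below reads a few of the memory-related variables mentioned above from a running MySQL server. It assumes the mysql-connector-python package is installed; the connection details and the exact list of variables are placeholders to adapt to your own setup.

```python
# Minimal sketch: print some memory-related MySQL parameters so you can
# record them before and after each tuning step. Assumes the
# mysql-connector-python package; host, user and password are placeholders.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="monitor", password="secret")
cursor = conn.cursor()
cursor.execute(
    "SHOW GLOBAL VARIABLES WHERE Variable_name IN "
    "('key_buffer_size', 'innodb_buffer_pool_size', 'sort_buffer_size', 'tmp_table_size')"
)
for name, value in cursor.fetchall():
    # key_buffer_size caches MyISAM indexes; innodb_buffer_pool_size caches
    # InnoDB data and indexes. Both are reported in bytes.
    print(f"{name} = {int(value) / (1024 * 1024):.0f} MiB")
conn.close()
```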
But above all, don't allow the database to swap. You'll find documentation saying that having a swap file or partition of at least double the RAM size is a good thing. That does not mean you should actually use that swap space. It's there just in case, not for regular use.
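If you want to verify that, on Linux you can check how much of the database process itself has been pushed out to swap. Here is a minimal sketch that reads the VmSwap line from /proc/&lt;pid&gt;/status; the pid is whatever your database server's process id happens to be.

```python
# Minimal sketch, Linux-specific: report how much of a given process (for
# example your database server) currently sits in swap, using the VmSwap
# line of /proc/<pid>/status. Pass the database pid as the first argument.
import sys


def swapped_kb(pid):
    """Return the VmSwap value in kB for a process, or 0 if not reported."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmSwap:"):
                return int(line.split()[1])
    return 0


if __name__ == "__main__":
    pid = int(sys.argv[1]) if len(sys.argv) > 1 else 1  # placeholder default
    print(f"process {pid} has {swapped_kb(pid)} kB in swap")
```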
How much memory should I use?
You may hear from other people that if you're not using the swap file, you're not fully using your memory. They are wrong; it's a common misconception that comes from confusing physical and virtual memory. You may hear that if your system is not using 100% of physical RAM, you're not doing it right. You may hear that if your database is using more than 50% of physical RAM, you're not doing it right.
Those are the kind of "myths" that I hate. It all depends on the database server, and even on the storage engine used in the case of MySQL. There is no "right" answer that applies universally.
Oh yes, there is in fact one universal rule. Repeat after me: don't swap.