[Solved] Do you really need Virtual Memory?

alkaufmann

I just built myself a new system and with 16GB of memory, I have turned off the virtual memory. It seems to me that real memory is faster than swapping things back and forth from a hard drive and it should save wear and tear on the hard drive. Maybe the operating system only uses the pagefile if it runs out of real memory?

Ak
 

You can turn off virtual memory with 16GB installed, but at some point your PC may crash if it needs more than 16GB (HD video editing, 3D design software, games, concurrent VMs, etc.), or if a program specifically needs to use the page file, so I would never turn it off entirely. In your case I would set the initial size to 1GB and the maximum to 4GB.

:)
 
It's dangerous to shut it off. Logically, I can see why you'd think 16 GB would cover the needs of the system, but think about it: most programs use way more than that in overall swap space; the swap space is just for the stuff that is ready to be crammed into the processor. I only have 2 GB of RAM, and my system has set aside 5 GB of swap space. When working right, your system will determine how much you need and set it accordingly, but 1 to 4 GB sounds good to go with 16 GB of RAM. How big is that space with 16 GB installed?

Maybe one of us should figure out a way to use a 500 GB hard drive as RAM. Could that be done? Could an empty drive be hooked up and set to use its full space as virtual memory?
 
The problem here is that you really don't understand what virtual memory is, how it functions, and how Windows uses it to its advantage.

More than likely, you think virtual memory is when Windows takes memory and swaps it to disk. While this is true, it's only part of the story. Windows only swaps things to disk that have not been used for a long time, and it only does so with dynamic memory (that is, memory allocated by the application). It does not do this for the applications themselves, since Windows uses a technique called "demand-paged executables". This means that Windows maps the on-disk image of the application into virtual memory and simply discards memory pages it isn't using, then uses the same paging mechanism to load them back into memory if it ever needs them.

The purpose, however, of using virtual memory, even with a lot of physical memory, is that Windows can free up memory that is never used in order to use it for more important things, such as disk cache, or system cache (which also includes caching network resources). Windows throws away or swaps things you seldom use in order to make more room for things you are ACTUALLY using.

So by turning off swap, you are actually reducing the performance of your system, because it will have less room for those caches. Memory is there to be used, not to sit idle doing nothing. There is no reason to leave memory idle when it could be doing useful work.
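One easy way to put numbers on what the pagefile buys you is to look at the commit limit. Here is a minimal sketch of my own (assuming a Windows toolchain; error handling kept short) that queries GlobalMemoryStatusEx: the "page file" figures it reports are really the system commit limit (RAM plus pagefile), which is the headroom that shrinks to RAM alone when the pagefile is disabled.

Code:
#include <windows.h>
#include <cstdio>

int main() {
    MEMORYSTATUSEX ms = {};
    ms.dwLength = sizeof(ms);
    if (!GlobalMemoryStatusEx(&ms)) {
        std::printf("GlobalMemoryStatusEx failed: %lu\n", GetLastError());
        return 1;
    }

    // Physical RAM: what is installed vs. what is currently in use.
    std::printf("Physical memory: %llu / %llu MB in use\n",
                (ms.ullTotalPhys - ms.ullAvailPhys) / (1024 * 1024),
                ms.ullTotalPhys / (1024 * 1024));

    // "Page file" here is really the commit limit (RAM + pagefile).
    // With the pagefile disabled this drops to roughly the RAM size.
    std::printf("Commit charge:   %llu / %llu MB of limit\n",
                (ms.ullTotalPageFile - ms.ullAvailPageFile) / (1024 * 1024),
                ms.ullTotalPageFile / (1024 * 1024));
    return 0;
}

Run it with and without a pagefile: the physical numbers don't change, but the commit limit does.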
 
I seem to recall that the pagefile is where crash dump data gets written, which would be important for resolving problems.

I just set it to default to a little over the minimum recommended, but with the option to expand as needed.
 
In support of "Ak" (the thread starter), and to Mystere: excellent, concise description of page file / swap memory usage.

Since the advanced system settings make it optional, a.k.a. setting it to zero, or no paging file, I guess we can assume it's an acceptable option. It may not be, but the option is there. I have never used a dump file. I have not had a computer crash since 1998. I have not used a page file in years, have had no problems, and have seen no performance degradation. My WEI scores (probably irrelevant) are all high end. My computer boots in seconds to the Start screen, and there is no delay in launching anything. But then again, I am not creating fractals, running pi calculations or benchmarking loads, I don't do virtual anything, and I don't play games or anything CPU intensive (except for Microsoft Truck Madness 2). So a quick Acronis image takes 2 to 3 minutes in full disk mode and it's done. Incrementals take less than a minute.
 
There are also some very good posts on this:

Server Fault: Any benefit or detriment from removing a pagefile on an 8GB RAM machine? (windows server 2008)
Pushing the Limits of Windows: Virtual Memory (Mark Russinovich, TechNet Blogs)

The long and short of it is this: if you want to invest the time in learning all about virtual memory, how it works, what it does, etc., and after doing all that you know all the risks and repercussions, then by all means do whatever you want.

However, if you really don't have an interest in learning all the details, I would suggest leaving it at the Windows-defined default. You may never have a problem, but if you do, you won't know how to deal with it, or even understand why it's happening. For instance, if you get a blue screen, your computer will just magically reboot, and you'll be left wondering why. You'll spend hours, if not days, posting to boards, and nobody will know what your problem is because you won't have connected the dots well enough to give them the details.

I'm never one to tell people not to experiment. By all means do so, but if you do, you have to be willing to accept that things can get messed up. And you may be stuck reinstalling everything. If that's not something you want to do, then I would suggest leaving your system in its default settings.
 
Let's say I set up a maximum of 4GB of virtual memory and I have 16GB of actual RAM. What would happen if an application requires 30GB of RAM? Would I get the blue screen of death with some system code, or would the application tell me I do not have enough memory?

Thanks for trying to explain it to me but I still don't see how using virtual memory is better than using real memory that is doing nothing. I do keep track of how much memory my system is using at all times.

Ak
 
Are you going to be running any applications that normally use 30GB of RAM?

And the OS may be laggy for a moment as it approaches max capacity, then it'll kill the app.
 
I was just using that 30GB as an example. I suspect that if an application requests more memory than what is available, the system would not crash and neither should the application.

One of the reasons I got more memory than I would ever need is that I have set up a RAM drive. It makes for a very speedy machine. :cool:

Ak
 
If you're setting up a RAM drive, then you're taking away your extra memory. I would definitely re-enable virtual memory in that case.
 
There's more going on under the hood. Generally speaking, Windows maintains a backing store, meaning that it wants to see everything that's in memory also on the disk somewhere. Now, when something comes along and demands a lot of memory, Windows can clear RAM very quickly, because that data is already on disk, ready to be paged back into RAM if it is called for. So it can be said that much of what's in pagefile is also in RAM; the data was preemptively placed in pagefile to speed up new memory allocation demands.

Interesting part of the article!

ram drive

It's another point here!

:)
 
I just built myself a new system and with 16GB of memory, I have turned off the virtual memory. It seems to me that real memory is faster than swapping things back and forth from a hard drive and it should save wear and tear on the hard drive. Maybe the operating system only uses the pagefile if it runs out of real memory?

Virtual memory isn't something that can be turned on or off. It is a central concept in Windows memory management, as important to how the OS operates as a heart and lungs are to you and me.

Virtual memory is a complex system involving the CPU and OS. This system provides each process with a private virtual environment, an address space of 2 GB (in a 32-bit OS) in which its code and data are stored. Note that the size of this address space is completely independent of RAM size. To a process, this is what memory is. A process knows nothing of how much RAM is in the system, where it is, or how it is being used. RAM is managed by the system memory manager and its operation is completely transparent to an application. A process can learn some of these details, but few do, it being little more than useless trivia.
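To make the point that address space and RAM are different things, here is a rough sketch of my own (assuming a 64-bit Windows build) using VirtualAlloc. Reserving address space costs neither RAM nor pagefile; only committing and touching pages creates commit charge.

Code:
#include <windows.h>
#include <cstdio>

int main() {
    // Reserve 8 GB of address space. Nothing is committed yet, so this
    // consumes neither RAM nor pagefile -- it only claims addresses.
    // (A 32-bit process, with its 2 GB user address space, couldn't do this.)
    const SIZE_T reserveSize = 8ull * 1024 * 1024 * 1024;
    void* region = VirtualAlloc(nullptr, reserveSize, MEM_RESERVE, PAGE_NOACCESS);
    if (!region) {
        std::printf("Reserve failed: %lu\n", GetLastError());
        return 1;
    }

    // Commit a single page inside the reservation. Only now does the memory
    // manager charge it against the commit limit (RAM + pagefile).
    void* page = VirtualAlloc(region, 4096, MEM_COMMIT, PAGE_READWRITE);
    if (!page) {
        std::printf("Commit failed: %lu\n", GetLastError());
        return 1;
    }
    static_cast<char*>(page)[0] = 42;   // first touch brings in a physical page

    std::printf("Reserved 8 GB at %p, committed one 4 KB page\n", region);
    VirtualFree(region, 0, MEM_RELEASE);
    return 0;
}

Task Manager will show essentially no memory use for the 8 GB reservation, which is exactly the point.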

This concept of virtual memory is nothing new, being used in every Windows server and desktop OS for some 20 years. And it has been used in large computer systems since the 1960s. Linux and Mac OS X follow the same principles, differing only in the details.

A virtual memory system offers many advantages to developers and computer users alike. Modern operating systems wouldn't have anywhere near the capabilities they have without it. Unless the system is under severe memory pressure, performance is very good.

Unfortunately, Microsoft has done little to educate the public about this. In fact, many computer professionals lack even a basic understanding. In much user-level documentation it is described as "using a file on the hard disk as if it were RAM". A serious misrepresentation if ever there was one, and one that has been the cause of much unwarranted criticism of how Windows manages memory.

The pagefile is a small but important part of the virtual memory system. Its purpose is to optimize the operation of the system, and it generally works very well. It works by providing a place where the memory manager can store rarely used data, relieving RAM of this burden and letting it do what it does best.

Unless you have a specific need and you understand what you are doing (and you can't learn this by reading a few forum posts) it is best to leave pagefile configuration on default settings. Many people have disabled the pagefile in the belief they are benefiting performance. In most cases they are only fooling themselves.
 
Virtual memory isn't something that can be turned on or off. It is a central concept in Windows memory management, as important to how the OS operates as a heart and lungs are to you and me.

He's talking about swapfile size. Yes, "virtual memory" is a very overloaded term; I've given up trying to educate people on that.
 
Well, with 16 GB of RAM, try it. Then open a Blu-ray movie if you have one on disk and see what happens when you try to play it, or use a Blu-ray player program if you have one on your system. If the movie file is 30 GB or larger, will it load all of it into VLC or whatever video player you use? Or will it just load the video and audio as needed by the program?

If you are running a program that LOADS a 30 GB file, it may choke on that without the virtual memory. But you can try it out and see what happens. I'd be eager to see how the system runs without virtual memory.
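For what it's worth, players don't normally slurp the whole file; they read it in chunks. Here is a minimal, generic sketch of that idea (my own illustration, not any particular player's code; process() is a made-up placeholder for decoding or whatever per-chunk work you'd do), which keeps memory use at a few megabytes no matter how big the file is.

Code:
#include <cstdio>
#include <fstream>
#include <vector>

// Placeholder for per-chunk work (decoding, hashing, ...). Hypothetical.
void process(const char* /*data*/, std::size_t /*len*/) {}

int main(int argc, char** argv) {
    if (argc < 2) { std::printf("usage: %s <file>\n", argv[0]); return 1; }

    std::ifstream in(argv[1], std::ios::binary);
    if (!in) { std::printf("cannot open %s\n", argv[1]); return 1; }

    // Fixed 4 MB buffer: the only memory the file ever occupies at once.
    std::vector<char> buf(4 * 1024 * 1024);
    while (in.read(buf.data(), buf.size()) || in.gcount() > 0) {
        process(buf.data(), static_cast<std::size_t>(in.gcount()));
    }
    return 0;
}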
 
I just built myself a new system and with 16GB of memory, I have turned off the virtual memory. It seems to me that real memory is faster than swapping things back and forth from a hard drive and it should save wear and tear on the hard drive. Maybe the operating system only uses the pagefile if it runs out of real memory?
Ak

Leave it at the default, really; it's been discussed for ages. Unless you use a very small-capacity SSD and you're very low on space to hold the pagefile, don't mess with it.
 
Let's say I set up a maximum of 4GB of virtual memory and I have 16GB of actual RAM. What would happen if an application requires 30GB of RAM? Would I get the blue screen of death with some system code, or would the application tell me I do not have enough memory?

The attempted allocation would fail, and the program making the attempt would react however it reacts. Well-written programs would handle the issue gracefully. Less well-written programs would tend to crash immediately as they try to access memory through a null pointer. Terribly written programs would run for a while longer and corrupt something totally unrelated.
 
The attempted allocation would fail, and the program making the attempt would react however it reacts. Well-written programs would handle the issue gracefully. Less well-written programs would tend to crash immediately as they try to access memory through a null pointer. Terribly written programs would run for a while longer and corrupt something totally unrelated.

Almost no application can be written well enough to deal with out-of-memory conditions, because there are simply way too many uses of memory in the system. An app has to be written very specially to be able to survive this, and that is typically only done in real-time systems.
 
Almost no application can be written well enough to deal with out-of-memory conditions, because there are simply way too many uses of memory in the system. An app has to be written very specially to be able to survive this, and that is typically only done in real-time systems.

It does depend on the nature of the failed allocation and what you mean by "deal with". A program that attempts to allocate a huge array and can't handle it gracefully is a terribly written program, period. As for smaller allocations that are expected to never fail, in a C++ program, a failed new would throw an exception, the stack would be unwound, destructors for local objects would run, and the program would remain in a well-defined state. You do need to avoid dynamic memory allocation in your recovery code, and while that can be subtle, recovery code is normally about releasing resources, not acquiring new ones. For some programs, terminating immediately might be an acceptable "graceful" response. What should not be done is to pretend memory allocation cannot fail and to continue on blithely after it does such that the problem manifests in code possibly far removed from the allocation, such that memory allocation failure is indistinguishable from a program bug.

That said, if normal small allocations start to fail in a virtual memory system, the system as a whole will likely have become unusable due to pagefile thrashing, which is something users learn to avoid and deal with by terminating programs before it happens. So "running out of memory" isn't all that common a problem in a virtual memory system, and when it does happen, you are at the mercy of every component in the OS handling it gracefully. That can be an issue.
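As a rough sketch of the C++ behaviour described above (my own example; the 30 GB figure is just the number used earlier in the thread, and loadHugeDataset is a made-up name), a failed allocation throws std::bad_alloc, the stack unwinds, and the program can fail that one operation instead of dying.

Code:
#include <cstdio>
#include <new>
#include <vector>

// Try to grab ~30 GB. On a machine that can't commit that much, resize()
// throws std::bad_alloc; locals are destroyed during unwinding and the
// program stays in a well-defined state. (Assumes a 64-bit build, where the
// request is at least within the vector's theoretical max_size.)
bool loadHugeDataset(std::vector<char>& out) {
    try {
        out.resize(30ull * 1024 * 1024 * 1024);
        return true;
    } catch (const std::bad_alloc&) {
        return false;   // report failure instead of crashing
    }
}

int main() {
    std::vector<char> data;
    if (!loadHugeDataset(data)) {
        std::printf("Not enough memory for the dataset; carrying on without it.\n");
        return 0;
    }
    std::printf("Loaded %zu bytes\n", data.size());
    return 0;
}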
 
Sorry, but I don't consider shutting down like that to be "graceful". For instance, you may lose your current document because memory is in an unstable state. You may not be able to open a file save dialog, or even write a dump file to disk. You can't really put a try/catch around everything that could possibly allocate some memory. It would make your code virtually unreadable, and difficult to maintain.

And exiting with an unhandled exception is hardly graceful, even if it does clean up after itself as much as it can... Basically, most experts say it's fruitless to check most memory allocations. Yes, checking large allocations can be beneficial, such as knowing whether you can load a large bitmap into memory... but that kind of memory failure is different from a memory-exhaustion situation.
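In that spirit, here is a tiny sketch of checking only the one huge allocation (the dimensions and names are made up for illustration), using the nothrow form of new so the failure comes back as a null pointer you can test, while ordinary small allocations are left to the default behaviour.

Code:
#include <cstdint>
#include <cstdio>
#include <new>

int main() {
    // A hypothetical enormous bitmap: 60000 x 60000 pixels, 4 bytes each (~13 GB).
    const std::size_t width = 60000, height = 60000;
    const std::size_t bytes = width * height * 4;

    // nothrow new returns nullptr on failure instead of throwing,
    // so this one big allocation can be checked explicitly.
    std::uint8_t* pixels = new (std::nothrow) std::uint8_t[bytes];
    if (!pixels) {
        std::printf("A %zu x %zu image is too large to load on this machine.\n",
                    width, height);
        return 1;
    }

    // ... work with the bitmap ...
    delete[] pixels;
    return 0;
}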
 