Thinking about script memory in LSL is very important. In Mono, the modern script engine, each script gets 64K of memory allocated by default. This is true even if the script only uses a few kilobytes of it.
With thousands of scripts in a region, this can easily consume all the memory allocated to the region and cause it to start swapping memory out to disk (this all happens on the Linden Lab server running the region). Swapping is a Bad Thing™ and will drastically slow down the sim, which you’ll see as lag.
So, what can we as script creators do? If you have a script that takes no external input, you can limit the amount of memory your script will be allocated with a llSetMemoryLimit () function call.
Firstly, what do I mean by “takes no external input”? An example of something that takes external input is a picture frame that displays the owner’s selection of textures. The owner can drag textures into the frame and display them. Typically, the script does this by loading the names of the textures into a list, so the amount of memory used by the list is out of the script creator’s control. It depends on how many textures the owner loads into the frame’s contents, not on how the script creator codes the script.
So, if you have something that doesn’t allocate memory out of your control, how do you know what upper limit to set with your call to llSetMemoryLimit ()? There’s a set of memory profiling calls that will tell you. And, as you may have guessed, this post was prompted by someone triggering the swapping problem on my home sim, which made me see if I could drop the memory that my scripts allocate.
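The built-in functions involved are llSetMemoryLimit (), llScriptProfiler (), and llGetSPMaxMemory (). Here’s a minimal standalone sketch of the raw calls (this isn’t my include file, just an illustration of the API):

```lsl
// Minimal sketch of the raw memory calls -- illustrative only.
default {
    state_entry () {
        // Cap this script at 8000 bytes. Returns FALSE if the limit
        // is smaller than the memory already in use.
        llSetMemoryLimit (8000);
        // Start recording the script's peak memory use.
        llScriptProfiler (PROFILE_SCRIPT_MEMORY);
    }

    touch_start (integer n) {
        // Report the peak memory used since profiling started, in bytes.
        llOwnerSay ((string) llGetSPMaxMemory ());
        // Profiling slows the script down, so stop it when done.
        llScriptProfiler (PROFILE_NONE);
    }
}
```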
To make things easy, I created an include file for my scripts. Let me show you an example of how I use it. You can view the included files by following the links in the comments next to them.
#define DEBUG
#include "debug.lsl" // See the source of debug.lsl
#define MEMLIMIT 8000
#define PROFILE
#include "profile.lsl" // See the source of profile.lsl
#define BLUE "c3623b1f-db83-4003-bb6d-d0d60d32c621"
phantom (integer p) {
    llSetLinkPrimitiveParamsFast (LINK_THIS, [PRIM_PHANTOM, p]);
}

default {
    state_entry () {
        init_profile ();
        llCollisionFilter ("", BLUE, TRUE); // only report collisions with my avatar
        phantom (FALSE);
    }

    collision_start (integer n) {
        phantom (TRUE);
        llSetTimerEvent (2.0);
    }

    timer () {
        llSetTimerEvent (0.0);
        phantom (FALSE);
        show_profile ("timer");
    }
}
This is a simple script that goes in an invisible barrier. It lets me walk through it by turning the barrier phantom when I collide with it, but for everyone else, it does nothing. It takes no external input, all the memory it will use is its code, it doesn’t even use any variables! So this is a perfect candidate for limiting script memory allocation.
The code at the top includes my standard debugging code, and the
#define MEMLIMIT 8000
#define PROFILE
#include "profile.lsl"
defines an initial memory limit of 8000 bytes, tells profile.lsl to enable profiling, and includes the profiling code.
Now notice the calls to init_profile () and show_profile (). The init_profile () call sets the script’s memory limit based on the MEMLIMIT definition, and starts memory profiling if PROFILE is defined. (A warning here: memory profiling can drastically slow down your script, so make sure you turn it off when you’re done!)
The show_profile () call displays the maximum amount of memory the script has used so far. It can be tricky to figure out where to put this call, but in this case it’s easy: the script first sets everything up, then waits for me to collide with it, after which the timer fires. So the logical place for the show_profile () call is at the end of the timer event; all the code has run by that stage.
When you save this, and collide with it (if you’re trying this, don’t forget to replace my UUID with yours!), you’ll see something similar to the following…
Barrier: Memory profile init: 0
Barrier: Memory profile show (timer): 7598
This tells us that at the end of the timer event, the most memory the script has ever used is 7598 bytes, so our starting figure of 8000 was close. If you see
Barrier: MEMLIMIT = 8000 too small
increase the size of MEMLIMIT until this message goes away.
I like to leave a little buffer between the size reported and the limit, just in case 🙂 So in my example, I left MEMLIMIT at 8000. Doing this simple exercise saved over 56K in this one script! That may not sound like a lot, but don’t forget there can be thousands of scripts running in a region, so it adds up.
Once you’ve figured out a good number for MEMLIMIT, don’t forget to turn off debugging and profiling by changing #define DEBUG and #define PROFILE to #undef DEBUG and #undef PROFILE, and leave the init_profile () and show_profile () calls in place. With PROFILE undefined, the init_profile () call just sets the memory limit, and the show_profile () call expands to nothing.
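In case you’re curious how an include file can do that, here’s a simplified sketch of how such a file can be structured using the preprocessor. The macro bodies below are my illustration, not a copy of the real profile.lsl (follow the link above for that):

```lsl
// Illustrative sketch only -- see the linked profile.lsl for the real code.
#ifdef PROFILE
#define init_profile() \
    if (!llSetMemoryLimit (MEMLIMIT)) \
        llOwnerSay ("MEMLIMIT = " + (string) MEMLIMIT + " too small"); \
    llScriptProfiler (PROFILE_SCRIPT_MEMORY); \
    llOwnerSay ("Memory profile init: " + (string) llGetSPMaxMemory ())
#define show_profile(where) \
    llOwnerSay ("Memory profile show (" + where + "): " \
        + (string) llGetSPMaxMemory ())
#else
// Profiling off: still set the limit, but show_profile () compiles away.
#define init_profile() llSetMemoryLimit (MEMLIMIT)
#define show_profile(where)
#endif
```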
If every script creator did this, we’d be living much less laggy second lives!