When accessing a large data file in Silk Performer you may find that memory consumption increases steadily throughout the duration of the load test for each "PerfRun.exe" process. If the load test runs for a long enough period, the memory consumption of each "perfrun.exe" process will eventually reach the maximum threshold, causing the load test to fail. It is important to note that this behaviour is not an indication of a memory leak in Silk Performer; it indicates instead that the "process space" value is growing and may exceed the finite "Process Space" threshold, causing the process to hang. If you wish to identify a memory leak, you should instead look at the measure/column "VM Size".

Process space is equal to "Virtual Bytes" in Perfmon; it is the current size, in bytes, of the virtual address space the process is using, and the problem is that this virtual space is a finite number. "Mem Usage" in Task Manager is the same as Working Set in NT Perfmon, which is defined as follows: Working Set is the current number of bytes in the Working Set of this process. The Working Set is the set of memory pages touched recently by the threads in the process. If free memory in the computer is above a threshold, pages are left in the Working Set of a process even if they are not in use. When free memory falls below a threshold, pages are trimmed from Working Sets. If they are needed, they are soft-faulted back into the Working Set before they leave main memory.

The problem in this type of scenario is that the BDF script being executed overfills the process space (i.e. the working set of the process grows until it reaches the threshold). This can be caused by the size of the data file being read into memory and/or by the use of global variables for retrieving the data. To prevent this type of problem we would generally advise end users not to use significantly large data files. If this is unavoidable, you will need to take the system limitation described above into account; in that case we advise distributing the virtual users over a greater number of Silk Performer agents and configuring the "virtual users per process" setting so that an acceptable number of VUsers share each replay engine process (PerfRun.exe).

To calculate the system limitation for the "PerfRun.exe" process, first note the number of virtual users per process set in "System Settings | Workbench | Control", then apply the formula below:

    virtual memory usage per process (i.e. "Mem Usage" in Task Manager) = data file size (80 MBytes in this example) x "virtual users per process"

In this scenario it is safe to specify 10 virtual users per process (10 x 80 MBytes = 800 MBytes of virtual memory usage), because the limit for a single process is approximately 2 GB on a 32-bit Windows operating system. If, for example, we specified 50 VUsers per process, each VUser would map the data file into the process space and a single perfrun process would need to hold 50 x 80 MB = 4 GB, which significantly exceeds the 2 GB Windows threshold. The solution in this example is therefore to uncheck the automatic calculation option in the Virtual Users group box and set the "virtual users per process" value to 10.
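As a rough illustration of the calculation above, the Python sketch below applies the same formula to estimate per-process memory usage and to find the largest "virtual users per process" value that fits under the limit. The 80 MB data file size and 2 GB process limit are the example values from this article; the function names are purely illustrative and are not part of Silk Performer.

    # Illustrative sizing helper (hypothetical names, not part of Silk Performer).
    # Assumes each virtual user maps the full data file into the PerfRun.exe
    # process space, and an approximate 2 GB per-process limit on 32-bit Windows.

    DATA_FILE_MB = 80          # size of the data file read by each virtual user
    PROCESS_LIMIT_MB = 2048    # approximate 32-bit Windows per-process limit

    def memory_per_process_mb(vusers_per_process: int) -> int:
        """Estimated 'Mem Usage' of one PerfRun.exe process."""
        return DATA_FILE_MB * vusers_per_process

    def max_vusers_per_process() -> int:
        """Largest 'virtual users per process' value that stays under the limit.
        In practice leave headroom for the replay engine itself; this article
        conservatively chooses 10."""
        return PROCESS_LIMIT_MB // DATA_FILE_MB

    print(memory_per_process_mb(10))   # 800  -> fits comfortably under 2048 MB
    print(memory_per_process_mb(50))   # 4000 -> far exceeds the 2 GB threshold
    print(max_vusers_per_process())    # 25   -> theoretical ceiling with no headroom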
In any tests run afterwards, the "Mem Usage" column in Task Manager should increase for each PerfRun process to a maximum of roughly 800 MBytes or below and then remain stable, in line with the file-reading progress of each virtual user running in that process.

Old KB# 17420