Machine | CPU | RAM | Cores |
---|---|---|---|
Host | Intel(R) Core(TM) i3-4100U CPU @ 1.80GHz | 8GB | 2 cores (1 essentially unused) |
UCP/AM | Intel Core Processor (Haswell) (kvm) | 1.5GB | 1 core (shared with PM) |
PM | Intel Xeon E312xx (Sandy Bridge) (kvm) | 1GB | 1 core (shared with UCP) |
Load was generated using 2 hosts:
Host | Role | Hardware |
---|---|---|
saturn3 | 500 PAX devices and data logging | Intel(R) Core(TM) i7-7560U CPU @ 2.40GHz - 16GB |
loadtest1 | 150 PAX devices | Intel(R) Core(TM) i3-4030U CPU @ 1.90GHz - 8GB |
Load was generated in two groups started simultaneously:
Group | Devices | Host |
---|---|---|
Group 1 | 150 devices - spaced at 6 second intervals | loadtest1 |
Group 2 | 500 devices - spaced at 3 second intervals | saturn3 |
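The staggered start described above can be sketched as a simple schedule generator (a hypothetical illustration of the test plan; the actual load scripts are not shown here):

```python
# Sketch of the staggered device start-up used by the load generators.
# Group sizes and spacings are from the test plan above; function and
# field names are illustrative, not taken from the real load scripts.

def start_schedule(groups):
    """Return (start_time_s, group_name, device_index) tuples, sorted by time."""
    events = []
    for name, count, spacing_s in groups:
        for i in range(count):
            events.append((i * spacing_s, name, i))
    return sorted(events)

# Both groups start simultaneously at t=0.
schedule = start_schedule([
    ("Group 1 (loadtest1)", 150, 6),   # 150 devices, 6 s apart
    ("Group 2 (saturn3)", 500, 3),     # 500 devices, 3 s apart
])
```

Under this schedule the last Group 1 device starts at 149 * 6 = 894 s (~15 minutes in) and the last Group 2 device at 499 * 3 = 1497 s (~25 minutes in), so all 650 devices are on within roughly half an hour.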
Each PAX device is simulated using PhantomJS running a script that:
The PM VM has not consumed the entire 1GB allocated to it. The graphs show it slowly approaching this limit, but during this load test it only reaches about 800MB. The PM's memory usage (measured inside the VM) indicates that this isn't really 'real' memory use: the PM is using and freeing memory, but the KVM on the UCP is not reclaiming it, so the VM's footprint on the host only grows. At the end of the run only 370MB is actually in use inside the VM. (Note that several bash prompts, used for monitoring during the test run, consume quite a bit of memory not shown here.) Presumably the footprint will level off at 1GB. I could not find a way to force KVM to pre-allocate all of a VM's memory; this is apparently supported in newer versions of KVM than we are running. The long-duration testing should show it finally topping out at 1GB.

Note that during the whole run the PM's python Manager grows only to 32MB and then levels off. The AM's Manager is approximately 20MB (the PM python has more code included to handle the passenger functions).
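The gap between the VM's footprint on the host and the memory actually in use inside the guest can be seen by comparing the KVM process size with `/proc/meminfo` inside the VM. A minimal sketch of the guest-side calculation (a hypothetical helper with sample values, not part of the test tooling):

```python
# Sketch: compute memory actually in use inside the guest from /proc/meminfo.
# This is the guest-side number; the KVM process footprint on the host can
# stay high even after the guest frees pages, since the host does not
# reclaim them.

def used_mb(meminfo_text):
    """Parse /proc/meminfo text; return (total - free - buffers - cached) in MB."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if rest:
            fields[key.strip()] = int(rest.split()[0])  # values are in kB
    used_kb = (fields["MemTotal"] - fields["MemFree"]
               - fields.get("Buffers", 0) - fields.get("Cached", 0))
    return used_kb // 1024

# Illustrative sample, not actual numbers captured during the run:
sample = """MemTotal: 1024000 kB
MemFree: 400000 kB
Buffers: 50000 kB
Cached: 195000 kB"""
print(used_mb(sample))  # 370
```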
The passenger devices are added very quickly, which results in the very spiky CPU loading seen particularly on the PM. As more and more devices are added, the PM lags behind in processing the iptables and PaxUpdate messages that the ground system is sending. As the first group of devices finishes up (at around 2:40) the load peaks, then drops back down as the device-addition rate falls. The high load also makes the snmpd daemon quite active, and this SNMP activity in turn causes the python process to do more work, since it is an AgentX client.
Devices have a ~10 minute idle timeout after the simulated PAX device stops loading content; the ground system then idle-times-out the devices. It does this in blocks, which causes the staggered decrease in device count seen on the graph. The ground system is not optimized to remove large numbers of devices 'all at once' like this (it uses a different method to clean up at the end of a flight), so it takes longer to remove these simulated devices than it would on a real aircraft.
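The block-wise removal described above can be illustrated with a small sketch (hypothetical names and block size; the ground system's actual removal logic is not shown here):

```python
# Sketch: an idle-timeout sweep that removes devices in blocks. A periodic
# sweep collects devices idle longer than the timeout and removes them one
# block at a time, producing the staggered drop seen on the graph.

IDLE_TIMEOUT_S = 10 * 60   # ~10 minute idle timeout, per the test notes
BLOCK_SIZE = 50            # illustrative block size, not the real value

def expired(devices, now):
    """Devices whose last activity is older than the idle timeout."""
    return [d for d, last_active in devices.items()
            if now - last_active > IDLE_TIMEOUT_S]

def removal_blocks(devices, now, block_size=BLOCK_SIZE):
    """Split the expired devices into blocks for staggered removal."""
    stale = sorted(expired(devices, now))
    return [stale[i:i + block_size] for i in range(0, len(stale), block_size)]

# 120 devices, all idle for 11 minutes -> removed in blocks of 50, 50, 20.
devices = {f"pax{i:03d}": 0 for i in range(120)}
blocks = removal_blocks(devices, now=11 * 60)
print([len(b) for b in blocks])  # [50, 50, 20]
```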