Technical Report (20 pages)
Postscript (1.67MB)
PDF (300KB)
Brian D. Davison and Chandrasekar Krishnan
The Simultaneous Proxy Evaluation (SPE) architecture provides one way to measure the performance of proxy caches. It includes the novel ability to compare the performance of prefetching proxy caches, but poses a number of implementation challenges. In this report we describe our prototype implementation of SPE, the Rutgers Online Proxy Evaluator (ROPE). We discuss a number of issues raised during development, describe validation tests, and demonstrate the use of our prototype in two experiments that simultaneously evaluate up to four publicly available proxy cache implementations. In addition to measuring bandwidth used and response latencies, we discover unexpected caching bugs in two of the proxies tested.
Technical Report DCS-TR-445, Department of Computer Science, Rutgers University, August 2001.
In accordance with the terms of the Web Polygraph license under which we are permitted to publish Polygraph-generated results, we provide here (among other things) the raw logs from which we calculated performance as presented in our paper.
- Source code.
While we do not recommend that others use this code (it is too buggy for widespread use), we provide it for completeness. If you are interested in the code, please contact us by email for the latest versions and instructions for its use.
- The version of the Multiplier provided here is identical to the one used in the paper, except that this version uses threads, while the version used in the paper forked on each request.
- We also provide the source code for the Collector, in the form of the files we changed from squid-2.3, here.
- Two Perl scripts were used to analyze the resulting logs. The first, analyze-multiplier3.pl, processes the Multiplier logs to compute timing and other results. The second, squid-bw.pl, examines the logs generated by the Collector to compute bandwidth used (bytes sent and retrieved) per proxy.
- Logs.
- Synthetic workload. To generate a synthetic workload, we employed Polygraph to run five robots, each issuing one request per second. We used the standard "SimpleContent" Polygraph workload and "olcStatic" for the object life cycle. Object sizes were exponentially distributed with a 13KB mean, and objects were assigned to be cacheable 80% of the time. The log generated by the Multiplier for this dataset can be found here (1.4MB gzipped). The result of running analyze-multiplier3.pl on it is here (151KB gzipped). The log recorded by the Collector for this dataset can be found here (1.5MB gzipped).
- Reverse proxy workload. The log generated by the Multiplier for this dataset can be found here (6.4MB gzipped). The result of running analyze-multiplier3.pl on it is here (3.7MB gzipped).
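For readers who want to reproduce something like the synthetic workload's statistical shape without running Polygraph itself, the key parameters above (five robots at one request per second, exponentially distributed object sizes with a 13KB mean, 80% cacheability) can be sketched as follows. This is an illustrative simulation, not Polygraph's actual implementation; the function and constant names are our own.

```python
import random

# Workload parameters from the description above (assumed, not Polygraph code):
MEAN_SIZE = 13 * 1024      # 13KB mean object size
CACHEABLE_PROB = 0.80      # 80% of objects cacheable
ROBOTS = 5                 # five robots, one request per second each

random.seed(42)            # fixed seed for repeatability

def synth_object():
    """Draw one synthetic object matching the workload parameters above."""
    size = int(random.expovariate(1.0 / MEAN_SIZE))   # exponential sizes
    cacheable = random.random() < CACHEABLE_PROB      # 80% cacheable
    return size, cacheable

# One simulated second of requests across all five robots:
requests = [synth_object() for _ in range(ROBOTS)]
```

Over many draws, the sample mean of the sizes converges to roughly 13KB and the cacheable fraction to roughly 0.8, matching the workload description.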
Back to Brian Davison's publications