Data Prefetching

When data sources from the Internet or the Cloud are consumed on a mobile device, the consumption of data is, by nature, sensitive to interruptions: even a connectivity loss of a few seconds will give the user a bad experience. Hence, it makes sense to prefetch data and buffer it locally in order to mask connection losses or bad connection quality (Higgins et al., 2012). Prefetching refers to the technique of retrieving data from remote data sources before it is actually needed or used. It is related to caching, not only because caching and prefetching are often combined, but also because the prefetched data have to be stored in caches or similar modules. Like caching, prefetching can be applied in many different domains and with many different goals. It can be used, for instance, for Web content, multimedia content, database queries, (Web) service responses, and more, while the reduction of user-perceived latency, the handling of intermittent connectivity, and the optimization of energy consumption are examples of possible goals. Accordingly, different combinations of domains and goals have led researchers to develop different prefetching approaches: (Padmanabhan & Mogul, 1996) is an early work on improving latency in the World Wide Web; (Fan et al., 1999), (Cao, 2002), and (Shen et al., 2005) examine the issue in the world of mobile computing with a focus on bandwidth and energy consumption; and (Schreiber et al., 2010) present a solution for the prefetching of Web service responses. (Fitzek & Reisslein, 2001) and (Bagchi, 2011) present solutions that explicitly aim at media streaming.
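
The basic pattern can be illustrated with a minimal Python sketch that fetches resources ahead of time into a local in-memory buffer, so that later reads are served locally even if connectivity is lost. The resource names, the REMOTE dictionary, and the fetch_remote function are purely illustrative stand-ins for real network access and a real cache backend, not part of any of the cited approaches.

import time

# Simulated remote source; in practice this would be an HTTP request,
# a Web service call, etc. (names here are illustrative only).
REMOTE = {"map/tile/42": b"<tile bytes>", "radio/song/7": b"<audio bytes>"}

_cache = {}  # local buffer: url -> (fetch_time, data)

def fetch_remote(url):
    """Stand-in for an actual network request."""
    return REMOTE[url]

def prefetch(urls):
    """Fetch resources before they are needed and buffer them locally."""
    for url in urls:
        _cache[url] = (time.time(), fetch_remote(url))

def get(url):
    """Serve from the local buffer if possible, masking connectivity gaps;
    fall back to the network only for items that were not prefetched."""
    if url in _cache:
        return _cache[url][1]
    return fetch_remote(url)

prefetch(["map/tile/42"])   # done while connectivity is still good
print(get("map/tile/42"))   # served from the local buffer, even if offline now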

The decision whether to prefetch depends on the context of the device and the user and can take many different context parameters into account: most importantly, the current and future location of the user, but also data from the mobile contract, etc. Furthermore, different types of data need to be distinguished, since the "freshness" of data varies: some data is fresh and valid for only a very short time span, e.g., live traffic or parking data, while other data, e.g., map data, changes very seldom. While the former can only be prefetched for a short time span, the latter can be prefetched much earlier and for a longer time span. In the mobility domain, data prefetching can be used in several scenarios. For example, a user is in his car and drives towards a tunnel while listening to his favourite online radio station. In this scenario, the navigation system is aware of the tunnel and could start to prefetch some songs from the radio station in advance, i.e., before the car enters the tunnel and the phone loses its data connection. In general, prefetching can be used to mitigate the influence of weak data connections. In a different scenario, a user plans to go abroad for a couple of days but wants to use online information about the cities he is going to visit, such as the geographic locations of Points of Interest (POIs) or a map of the cities. This information can be downloaded via his home Wi-Fi or the hotel Wi-Fi in advance, so that he can avoid the costs caused by data roaming.
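
The following sketch shows, under simplified assumptions, how data freshness and a predicted connectivity gap (a tunnel, a roaming period) could enter such a prefetching decision. The Candidate fields, the validity values, and the select_for_prefetch heuristic are hypothetical illustrations, not taken from any of the cited works.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    validity_s: float   # how long the data stays "fresh" after fetching
    cost_mb: float      # download volume

def select_for_prefetch(candidates, gap_starts_in_s, gap_duration_s, budget_mb):
    """Pick items worth prefetching before a predicted connectivity gap.

    An item is only useful if it is still valid at the end of the gap,
    i.e. its validity outlasts the time until the gap plus the gap itself.
    """
    selected, used = [], 0.0
    for c in sorted(candidates, key=lambda c: c.cost_mb):
        if c.validity_s >= gap_starts_in_s + gap_duration_s and used + c.cost_mb <= budget_mb:
            selected.append(c.name)
            used += c.cost_mb
    return selected

candidates = [
    Candidate("live parking data", validity_s=120,        cost_mb=0.1),
    Candidate("city map tiles",    validity_s=86400 * 30, cost_mb=25.0),
    Candidate("POI descriptions",  validity_s=86400 * 7,  cost_mb=3.0),
]
# Tunnel predicted in 5 minutes, lasting 2 minutes, 30 MB download budget:
print(select_for_prefetch(candidates, 300, 120, budget_mb=30.0))
# -> ['POI descriptions', 'city map tiles']; the parking data is left out
#    because it would already be stale before the gap even starts.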

At the core of almost every prefetching approach is a prediction algorithm, i.e., an algorithm that determines what data should be prefetched and when. Different prediction algorithms make sense under different circumstances, and the same algorithm may perform very differently in different scenarios. Some prediction algorithms are based only on recent observations, others keep long-term historical data and corresponding conditional probability tables, and others build trees to represent user behaviour. The information exploited by the prediction algorithms is case-specific. A related survey by (Hartmann & Schreiber, 2007) describes and compares six prediction algorithms; another comparison and evaluation is given by (Domènech et al., 2007). Complete and specific prefetching approaches and solutions (e.g., (Padmanabhan & Mogul, 1996), (Fan et al., 1999), (Cao, 2002), (Schreiber et al., 2010), (Schreiber et al., 2011)), together with more abstract and general-purpose prediction algorithms, comprise the current state of the art in general data prefetching. The state of the art in mobile prefetching is defined by the work of (Higgins et al., 2012), who take the performance gain achieved by prefetching, energy usage, and data consumption into account in their prefetching decision algorithm. (Parate et al., 2013) provide a prefetching approach for mobile apps, which is based on an estimation of the app(s) to be used next; based on this prediction, data for these apps is prefetched. Context information like the future location of the user is not explicitly taken into account, although the authors do apply coarse-grained location information, e.g., "at home" or "workplace".
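
As a concrete illustration of the conditional-probability family of predictors mentioned above, the following minimal sketch implements a first-order Markov predictor over observed items (pages, apps, resources). It is a generic textbook example, not one of the specific algorithms surveyed by (Hartmann & Schreiber, 2007) or (Domènech et al., 2007).

from collections import defaultdict, Counter

class MarkovPredictor:
    """First-order Markov predictor: counts how often item B follows item A
    and predicts the most likely successors of the last observed item,
    i.e. a simple conditional probability table."""

    def __init__(self):
        self.transitions = defaultdict(Counter)
        self.last = None

    def observe(self, item):
        if self.last is not None:
            self.transitions[self.last][item] += 1
        self.last = item

    def predict(self, k=1):
        """Return the k most likely next items given the last observation."""
        if self.last is None or not self.transitions[self.last]:
            return []
        return [item for item, _ in self.transitions[self.last].most_common(k)]

predictor = MarkovPredictor()
for page in ["home", "news", "weather", "home", "news", "sports", "home", "news"]:
    predictor.observe(page)
print(predictor.predict(k=2))   # e.g. ['weather', 'sports'] -> candidates to prefetch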

Prefetching algorithms that explicitly take aspects of the mobility domain into account, e.g., the determination of bad cellular coverage from map data (for instance, in tunnels), limited download capacities, or the fact that a user does not want to pay roaming fees, have received little attention so far. One notable exception is the approach by (Hummer et al., 2014), which has been implemented as part of the SIMPLI-CITY project. In this solution, data prefetching is done for mobile users, taking into account the context in which the consumer is currently operating, including time, location, and projected route. Based on projections of the network quality at future locations, a data prefetching approach aiming at continuous Quality of Experience (QoE) is provided. The algorithm differentiates between different kinds of data, categorized by importance, time criticality, access pattern, and possible prefetching strategies.
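
The sketch below illustrates the general idea of scheduling prefetches along a projected route with per-segment network-quality forecasts and per-item importance and time criticality. It is an illustrative approximation of the concept only; the data structures, the quality threshold, and the scheduling rule are assumptions and do not reproduce the actual algorithm or implementation of (Hummer et al., 2014).

from dataclasses import dataclass

@dataclass
class RouteSegment:
    name: str
    eta_min: float            # minutes until the segment is reached
    predicted_quality: float  # projected network quality, 0 (none) .. 1 (excellent)

@dataclass
class DataItem:
    name: str
    importance: float         # 0 .. 1, importance of the data to the user
    time_critical: bool       # e.g. live traffic vs. static map data

def plan_prefetch(route, items, quality_threshold=0.3):
    """Schedule prefetches before route segments with poor projected quality.

    Time-critical items are fetched shortly before the poor segment to keep
    them fresh; non-time-critical items can be fetched immediately."""
    plan = []
    for seg in route:
        if seg.predicted_quality < quality_threshold:
            for item in sorted(items, key=lambda i: -i.importance):
                when = max(seg.eta_min - 2, 0) if item.time_critical else 0
                plan.append((when, item.name, f"before '{seg.name}'"))
    return sorted(plan)

route = [
    RouteSegment("highway A3", eta_min=5,  predicted_quality=0.9),
    RouteSegment("tunnel",     eta_min=12, predicted_quality=0.0),
]
items = [
    DataItem("radio stream buffer", importance=0.9, time_critical=True),
    DataItem("map tiles for exit",  importance=0.7, time_critical=False),
]
for minute, name, reason in plan_prefetch(route, items):
    print(f"t+{minute:>4.1f} min: prefetch {name} ({reason})")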


References and Further Reading



[1]
H. Shen, M. Kumar, S. K. Das, and Z. Wang, “Energy-efficient data caching and prefetching for mobile devices based on utility,” Mobile Networks and Applications, pp. 475–486, 2005.

[2]
W. Hummer, S. Schulte, P. Hoenisch, and S. Dustdar, “Context-Aware Data Prefetching in Mobile Service Environments,” in The 4th IEEE International Conference on Big Data and Cloud Computing (BDCloud 2014), 2014, pp. 214–221.

[3]
J. Domènech, A. Pont, J. Sahuquillo, and J. A. Gil, “A user-focused evaluation of web prefetching algorithms,” Computer Communications, vol. 30, no. 10, pp. 2213–2224, 2007.

[4]
A. Parate, M. Böhmer, D. Chu, D. Ganesan, and B. M. Marlin, “Practical prediction and prefetch for faster access to applications on mobile phones,” in The 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ’13), 2013, pp. 275–284.

[5]
S. Bagchi, “A Fuzzy Algorithm for Dynamically Adaptive Multimedia Streaming,” ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 7, no. 2, pp. 11:1–11:26, 2011.

[6]
F. H. P. Fitzek and M. Reisslein, “A prefetching protocol for continuous media streaming in wireless environments,” IEEE Journal on Selected Areas in Communications, vol. 19, no. 10, pp. 2015–2028, 2001.

[7]
D. Schreiber, A. Göb, E. Aitenbichler, and M. Mühlhäuser, “Reducing User Perceived Latency with a Proactive Prefetching Middleware for Mobile SOA Access,” International Journal of Web Services Research, vol. 8, no. 1, pp. 68–85, 2011.

[8]
B. D. Higgins, J. Flinn, T. J. Giuli, B. Noble, C. Peplin, and D. Watson, “Informed Mobile Prefetching,” in The 10th International Conference on Mobile Systems, Applications, and Services (MobiSys’12), 2012, pp. 155–168.

[9]
M. Hartmann and D. Schreiber, “Prediction algorithms for user actions,” in Proceedings of Lernen Wissen Adaption, ABIS, 2007, pp. 349–354.

[10]
D. Schreiber, E. Aitenbichler, A. Goeb, and M. Mühlhäuser, “Reducing User Perceived Latency in Mobile Processes,” in 2010 IEEE International Conference on Web Services (ICWS), 2010, pp. 235–242.

[11]
G. Cao, “Proactive power-aware cache management for mobile computing systems,” IEEE Transactions on Computers, vol. 51, no. 6, pp. 608–621, 2002.

[12]
L. Fan, P. Cao, W. Lin, and Q. Jacobson, “Web prefetching between low-bandwidth clients and proxies: potential and performance,” in ACM SIGMETRICS Performance Evaluation Review, 1999, vol. 27, pp. 178–187.

[13]
V. N. Padmanabhan and J. C. Mogul, “Using predictive prefetching to improve World Wide Web latency,” ACM SIGCOMM Computer Communication Review, vol. 26, no. 3, pp. 22–36, 1996.