Policy-enabled caching for distributed AI
Web caching has established itself as a key enabling technology within the Internet. It enables efficient browsing of websites and web-based services on bandwidth-constrained networks. However, similar techniques are not available for AI-based solutions. Many AI solutions are based on deep neural networks or similar approaches, which require machine learning models trained on huge amounts of data. Such models are best created in centralized locations with significant processing power, yet in many environments sending the data to a centralized location is infeasible or undesirable. A judicious combination of ideas borrowed from the web-caching paradigm with ideas from AI and machine learning can provide an effective solution for exploiting deep learning models in bandwidth-constrained environments. Allowing such caches to generate their own policies using a generative policy approach can enable the creation of a generic edge caching system that can be used with a wide variety of backend AI systems.
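As a minimal illustration of the idea (a sketch under assumed names, not the paper's actual design), an edge node could wrap an expensive backend model with a cache whose admission decisions are delegated to a pluggable policy callable, standing in for the generated policies described above:

```python
import hashlib
from collections import OrderedDict


class EdgeInferenceCache:
    """Hypothetical edge cache for backend AI predictions.

    Results are kept in LRU order; a pluggable `policy` callable
    decides whether each result is worth caching at this edge node,
    playing the role of a locally generated caching policy.
    """

    def __init__(self, backend_model, capacity=128,
                 policy=lambda key, result: True):
        self.backend_model = backend_model   # expensive centralized model
        self.capacity = capacity
        self.policy = policy                 # admission policy (assumed interface)
        self._store = OrderedDict()

    def _key(self, x):
        # Hash the input so arbitrary payloads can serve as cache keys.
        return hashlib.sha256(repr(x).encode()).hexdigest()

    def predict(self, x):
        key = self._key(x)
        if key in self._store:
            self._store.move_to_end(key)     # refresh LRU position on a hit
            return self._store[key]
        result = self.backend_model(x)       # cache miss: query the backend
        if self.policy(key, result):
            self._store[key] = result
            if len(self._store) > self.capacity:
                self._store.popitem(last=False)  # evict least recently used
        return result
```

Repeated queries for the same input are then served locally, so the bandwidth-constrained link to the centralized model is used only on cache misses; swapping in a different `policy` changes the caching behavior without touching the cache itself.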