We present a system that assists users with dietary logging by recognizing food from pictures snapped on their phone, in two different scenarios. In the first scenario, called "food in context," we exploit the user's GPS location to determine which restaurant they are having a meal at, thereby restricting the categories to recognize to the items on that restaurant's menu. This context also lets us report precise calorie information about the meal, since restaurant chains tend to standardize portions and publish the nutritional information of each dish. In the second scenario, called "food in the wild," we try to recognize a cooked meal from a picture that could be snapped anywhere. We perform extensive experiments on food recognition in both scenarios, demonstrating the feasibility of our approach at scale on a newly introduced dataset of 105K images covering 500 food categories.
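The "food in context" idea of restricting recognition to a menu can be sketched as masking and renormalizing the classifier's output distribution; the helper below is a hypothetical illustration (not the paper's implementation), assuming the GPS lookup has already mapped the user's location to a set of menu category indices:

```python
import numpy as np

def restrict_to_menu(class_probs, menu_indices):
    """Zero out the probabilities of food classes not on the restaurant's
    menu and renormalize, so the prediction is forced to a menu item.
    Hypothetical sketch of the GPS-context restriction step."""
    masked = np.zeros_like(class_probs)
    masked[menu_indices] = class_probs[menu_indices]
    total = masked.sum()
    return masked / total if total > 0 else masked

# Classifier scores over 5 food categories (toy example).
probs = np.array([0.40, 0.10, 0.25, 0.05, 0.20])
# Suppose GPS places the user at a restaurant whose menu covers classes 1, 2, 4.
menu = [1, 2, 4]
restricted = restrict_to_menu(probs, menu)
# The top prediction shifts from class 0 (off-menu) to class 2 (on the menu).
print(restricted)
```

Because the winning class is now guaranteed to be a menu item, its standardized portion size and published nutritional information can be reported directly to the user.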