Contextual sensing with wearable cameras has seen a variety of proposed camera angles, each capturing a different gamut of visual scenes. In this paper, we propose a new camera view that aims to capture, from a single vantage point, the same visual information as many of these camera positions and orientations combined. The camera, mounted on the corner of a glasses frame, points downward toward the floor, a field of view we name the Personal Activity Radius (PAR). The PAR field of view captures the visual information within a wearer's personal bubble, including the items they interact with, their body motion, and their surrounding environment. In our evaluation, we tested the PAR view's interpretability by human labelers in two activity-tracking scenarios: food-related behaviors and exercise tracking. Human labelers achieved high precision in identifying body motions in exercise tracking (91%) and eating/drinking motions (96%). Item-interaction identification reached 86% precision for labeling grocery categories. We give a high-level overview of the device setup and the contextual views we were able to capture with it. The camera's wide angle captures activities such as driving, shopping, gym exercises, walking, and eating, and can observe both the specific item the user interacts with and the immediate contextual surroundings.